FARVEL BIG TECH
We're doing ourselves a massive disservice by having a "no-AI no matter what" attitude.

Uncategorized · 8 posts · 4 posters · 0 views
This thread has been deleted; only users with topic-management privileges can view it.
budududuroiu@hachyderm.io wrote (#1):

We're doing ourselves a massive disservice by having a "no-AI no matter what" attitude.

LLMs and other deep learning models are producing advances in science every other day. For a group of people that usually embraces scientific progress (Mastodon), I find the community here very dogmatic, rejecting science when it comes from AI-assisted work.

I get it, there's a lot of slop that has flooded the academic community. There were also big issues with the scientific community before AI: lack of funding for replication studies; publication bias; and the file-drawer problem, where studies that fail to reject the null hypothesis are less likely to be published than those that produce a statistically significant result.

On the other hand, AlphaFold is pushing the boundaries of protein folding, MatterGen is producing many DFT-stable, database-novel crystal structures, and LLMs continue to solve Erdős problems.

    And there are open alternatives we should champion: OpenFold is attempting a permissive replication of AlphaFold 2, for example.

"But muh AI is fascism" - very Ameri-brained take; decentre yourself and fix your country. You can't lock the entire world out of scientific progress because you live in a failed state.

"But muh stealing work" - you're literally arguing JSTOR and Elsevier's point against Aaron Swartz.

"But muh energy use" - shunning indie devs who try to improve AI efficiency basically guarantees capture of AI by big players that can afford to just scale more instead of investing in better, post-transformer architectures.

I see the cynical view of AI displayed here as masking intellectual laziness, and I think it's frankly dishonest to our communities (that is, if we still want to consider ourselves champions of scientific progress).

    #noai #ai #llm #science #publications

budududuroiu@hachyderm.io wrote (#2):

      Sources:

      - https://www.nature.com/articles/s41586-021-04086-x

      - https://www.nature.com/articles/s41586-023-06004-9

      - https://www.nature.com/articles/s41586-022-05172-4

      - https://www.nature.com/articles/s41586-025-08628-5

anxiousmac@mstdn.social wrote (#3):

        @budududuroiu I value factual accuracy. LLMs seem to be incapable of guaranteeing that by design. Isn't your argument similar to "Electricity is capable of doing many great things, hence we should use it in unsafe circumstances and leave live terminals exposed?"

budududuroiu@hachyderm.io wrote (#4):

@anxiousmac verification is asymmetrically easier than discovery; plenty of problems we have yet to solve would be trivial to verify once we have a solution.

As an example from software: you can use formal proofs to deterministically verify the correctness of AI-generated code.

          https://arxiv.org/abs/2507.13290
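The asymmetry is easy to see with a toy sketch (illustrative only, not from the linked paper): finding a nontrivial factor of an integer takes a search, while checking a proposed factor is a single modulo operation.

```python
def find_factor(n: int) -> int:
    """Discovery: search for a nontrivial factor by trial division, O(sqrt(n))."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime: no nontrivial factor exists


def verify_factor(n: int, d: int) -> bool:
    """Verification: a single bounds check and modulo, O(1)."""
    return 1 < d < n and n % d == 0


n = 3 * 2_000_003
p = find_factor(n)          # slow search over candidates
print(verify_factor(n, p))  # instant check -> True
```

The same shape holds for the Erdős problems below: producing a proof is the hard part, while checking it (by a referee, or mechanically in a proof assistant) is comparatively cheap.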

Here is a repository of partially or fully AI-solved Erdős problems:
          https://github.com/teorth/erdosproblems/wiki/AI-contributions-to-Erd%C5%91s-problems

Your electricity analogy makes no sense, by the way.

bongolian@mstdn.social wrote (#5):

@budududuroiu Part of the problem is that different types of AI with different functions (broadly, predictive AI and generative AI) get lumped together. Generative AI proponents almost always lump these together in propagandistic or fantastical ways, e.g., "AI will solve global warming" (without any details) as an argument for building massive energy-using and water-wasting data centers.

budududuroiu@hachyderm.io wrote (#6):

@Bongolian Well, they're right, AI _will_ solve global warming: elites can effectively shelter in cooler climates, use AI and robotics for the labour they need to sustain their lives, and let vast numbers of people die from various climate-related causes. With that massive die-off of humans, emissions will probably decrease.

My argument is that the conversation is currently steered by AI hype and by elites who go "uhm... uhm... depends" when asked whether they'd want humanity to survive, because the other side (Mastodon, etc.) refuses to engage with probably the most consequential invention in human existence.

bongolian@mstdn.social wrote (#7):

                @budududuroiu "most consequential invention in human existence" is hubris. Arguably, the inventions of vaccines, antibiotics, and water sanitation were each far more consequential than generative AI ever will be.

pelle@veganism.social wrote (#8):

@budududuroiu using the #noAI tag for this take is basically like using the #vegan tag to tell everyone how good you think bacon is.

                  Powered by NodeBB Contributors
                  Graciously hosted by data.coop