We're doing ourselves a massive disservice by having a "no AI, no matter what" attitude.
LLMs and other deep learning models are driving advances in science every other day. For a community that usually embraces scientific progress, I find Mastodon very dogmatic here, rejecting science outright when it comes from AI-assisted work.
I get it, there's a lot of slop that has flooded the academic community. But there were also big issues with the scientific community before AI: lack of funding for replication studies, and publication bias (the "file drawer" problem), where studies that fail to reject the null hypothesis are less likely to be published than those that produce a statistically significant result.
On the other hand, AlphaFold is pushing the boundaries of protein structure prediction, MatterGen is producing many DFT-stable, database-novel crystal structures, and LLMs continue to contribute to Erdős problems.
And there are open alternatives we should champion: OpenFold is attempting a permissively licensed replication of AlphaFold 2, for example.
"But muh AI is fascism" - very Ameri-brained take, decentre yourself and fix your country, you can't lock the entire world out of scientific progress because you live in a failed state.
"But muh stealing work" - you're literally arguing JSTOR and Elsevier's point against Aaron Swartz.
"But muh energy use" - shunning indie devs who try to improve AI efficiency basically guarantees capture of AI by big players that can afford to just scale more instead of investing in better, post-transformer architectures.
I see the cynical view of AI displayed here as masking intellectual laziness, and I think it's frankly dishonest to our communities (that is, if we still want to consider ourselves champions of scientific progress).
-
@budududuroiu I value factual accuracy. LLMs seem to be incapable of guaranteeing that by design. Isn't your argument similar to "Electricity is capable of doing many great things, hence we should use it in unsafe circumstances and leave live terminals exposed?"
-
@anxiousmac Verification is asymmetrically easier than discovery: plenty of problems we have yet to solve would be trivial to verify once we have a candidate solution.
An example from software: you can use formal proofs to deterministically verify the correctness of AI-generated code.
https://arxiv.org/abs/2507.13290
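To make the asymmetry concrete, here's a toy sketch of my own (an illustration of the verify-vs-discover gap, not taken from the paper above): finding the prime factors of a number requires search, but checking a claimed factorization is a single multiplication.

```python
def verify_factorization(n, factors):
    """Check that every claimed factor is > 1 and that they multiply to n.

    Verification is one pass of multiplication, regardless of how much
    search it took to *find* the factors in the first place.
    """
    product = 1
    for f in factors:
        if f <= 1:
            return False
        product *= f
    return product == n

# Discovering the factors of 2021 takes trial division; checking them is instant.
print(verify_factorization(2021, [43, 47]))  # True
print(verify_factorization(2021, [41, 49]))  # False
```

The same asymmetry is what makes formally verified, AI-generated code plausible: the generator can be unreliable as long as the checker is sound.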
Here is a repository of partially or fully AI-solved Erdos problems:
https://github.com/teorth/erdosproblems/wiki/AI-contributions-to-Erd%C5%91s-problems
Your electricity analogy makes no sense, btw.
-
@budududuroiu Part of the problem is that different types of AI with different functions (broadly, predictive AI and generative AI) get lumped together. Generative AI proponents almost always lump them together in propagandistic or fantastical ways, e.g., "AI will solve global warming" (without any details) as an argument for building massive energy-using and water-wasting data centers.
-
@Bongolian Well, they're right, AI _will_ solve global warming: elites can shelter in cooler climates, use AI and robotics for the labour they need to sustain their lives, and let vast numbers of people die of climate-related causes. With a massive die-off of humans, emissions will probably decrease.
My argument is that the conversation is currently steered by AI hype and by elites who go "uhm... uhm... depends" when asked whether they'd want humanity to survive, because the other side (Mastodon, etc.) refuses to engage with what is probably the most consequential invention in human existence.
-
@budududuroiu "most consequential invention in human existence" is hubris. Arguably, the inventions of vaccines, antibiotics, and water sanitation were each far more consequential than generative AI ever will be.
-
@budududuroiu
using the #noAI tag for this take is basically like using the #vegan tag to tell everyone how good you think bacon is