I find it very frustrating that the only opinions people seem to be able to have about AI are:
A/ it's the next coming of Christ, just deploy it everywhere, and if it has any issues those will be fixed with the next update anyway
B/ it's completely useless, has never and will never do anything useful and it's all just a scam by tech bros who are too dumb to realize that it's useless
-
@Techaltar Mine is more that the ethical issues are serious enough that they taint the technology to the degree that it should not be used until they are resolved.
It also doesn’t help that we call everything AI, because there’s ML stuff that’s ethical and very useful, but it now goes by the same name as the toxic sludge that is being force-fed to everyone.
@ainmosni yeah that is fair enough, but do you actually see any path towards a resolution? As a content creator whose content was scraped to train AI on and whose job is at risk of being replaced by the very AI that was created, I have every incentive to be angry. But I also just ... don't see how you put the genie back in the bottle?
-
@Techaltar yes, not sure why group A just doesn't realise their mistake
-
@Techaltar I think there are quite a few people who think it has some use cases, and has been shown to be beneficial in those cases. But that nuance is lost in the flood of option A, which means if anyone says anything negative they get lumped into B.
-
@Techaltar My hope is that they never find a way to make the huge models profitable and that they simply go out of business. And considering the amount of resources needed to train those, stuff like Ollama will then start to go stale as well.
Sadly, that doesn’t mean that state actors won’t have access to stuff like that for deepfakes, which will still be a big problem, and there might be a use for small specialised models, but that will probably not be half as destructive.
That is my realistic hope, combined with me continually being surprised that many normal people look down on generated stuff.
-
@Techaltar I think that AI is an incredibly cool technology, but the current approach of just using general-purpose massive LLMs to answer prompts and hoping they'll consistently do useful work on their own is a dead end. Specialized models trained to utilize custom logic/plugin scripts (for actual precision and consistency) and relevant inputs, set up by people who know what they're doing, seem like the most powerful use case to me.
Is there something I'm blind to/missing?
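As a rough illustration of the setup described above (a model that only routes requests to small deterministic scripts rather than answering free-form), here is a minimal hypothetical sketch. The tool names, the hard-coded tool call and the dispatch logic are illustrative assumptions, not any particular vendor's API.

# Minimal sketch of "LLM + deterministic plugins": the model is only asked
# to pick a tool and its arguments; the precise work is done by ordinary code.
# The structured tool call below is hard-coded where a real model response
# would normally be parsed; names and schema are illustrative assumptions.
import json
from datetime import date

def days_until(deadline: str) -> int:
    """Deterministic helper: days from today until an ISO date."""
    return (date.fromisoformat(deadline) - date.today()).days

def vat_total(net: float, rate: float = 0.25) -> float:
    """Deterministic helper: gross amount for a given VAT rate."""
    return round(net * (1 + rate), 2)

TOOLS = {"days_until": days_until, "vat_total": vat_total}

def dispatch(tool_call_json: str):
    """Validate a model-proposed tool call and run the real implementation."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]           # unknown tools raise instead of hallucinating
    return fn(**call["arguments"])

if __name__ == "__main__":
    # Stand-in for what a specialised model might emit after reading a request
    # like "what does the 1,200 net offer come to with VAT?".
    fake_model_output = '{"name": "vat_total", "arguments": {"net": 1200.0}}'
    print(dispatch(fake_model_output))  # 1500.0 -- computed, not generated
-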
@Techaltar@mas.to
Extremism leads to voluntary disability. It doesn't matter how able-bodied a person is: if they have an extremist view, it becomes near impossible for them to see or hear.
-
@Techaltar
It's hard to be excited about gAI, and the more you know, the less room there is for it.
AI has been around for decades, but it has always been built for specific cases/knowledge. Now it has become a data Nyarlathotep that is destroying so many things (art rights, the economy, technology, nature, human rights, e.g. Palantir...). The more you try to fix it, the more you accept Dune's solution.
-
@Techaltar AI that is just LLMs is useless, because it's only useful if there is someone who checks whatever BS it makes.
Neural networks that adjust the white balance of photos, remove the background from a video or clean up an audio recording are advanced algorithms and not some digital sentient being that can be called AI, but they are the most useful.
-
@Techaltar it's obviously overhyped, but I like the fact that it will put paparazzi and models out of a job.
-
If it gives you any hope in humanity, I think AI will be incredibly useful for doing dangerous work humans shouldn’t do, for detecting defects in manufacturing, and in the medical field to help patients (not curing cancer, but detecting anomalies etcetera).
Oh and no, I do not want the AI bubble to pop because I realise it will do more harm than good. Actually exclusively harm. No good. Nothing will get better, everything will get worse…
-
@Techaltar it's not that it's useless, it's that it is dangerous. And I don't mean sci-fi movie dangerous, but "let's centralize all computation and knowledge with a few fascist companies and have the interaction with them be a manipulative chat bot". It scares me that people are relying on LLMs and that they are the ultimate manipulation tool owned by the least trustworthy people in the world. It scares me that they seem to destroy the alternatives to computation before they have a product
-
@Techaltar I believe a chat interface is a very inefficient way to interact with the computer; that's why all the long LLM text files are needed. But we are lulled into a false sense of usability because they are manipulative in the way they respond. There are probably other, more efficient ways to use LLMs, but they are not shiny or flashy and therefore don't convince investors or users. RAG is very neat and a powerful lookup technology, but everyone suddenly expects a conversation partner.
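A minimal sketch of what "RAG as a lookup technology" can look like without the chat framing: retrieve the snippets most relevant to a query and hand them straight to whoever asked. The toy corpus and the bag-of-words scoring below are illustrative stand-ins, not any particular library's API.

# Minimal retrieval step of a RAG pipeline, used as a lookup tool rather
# than a conversation partner. Bag-of-words cosine similarity stands in
# for a real embedding model; the corpus is a toy example.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Turn text into a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qvec = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(qvec, vectorize(d)), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    corpus = [
        "Invoice 1042 covers the March hosting costs for the staging cluster.",
        "The backup policy keeps nightly snapshots for thirty days.",
        "Hosting costs for production are billed quarterly under contract 77.",
    ]
    for hit in retrieve("what do we pay for hosting?", corpus):
        print(hit)  # the retrieved snippets are the answer, no chat needed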
-
@Techaltar I'm also vomit-in-mouth disgusted by the gullibility and manipulativeness whenever I see a headline like "AI finds tumors" and everyone is like "see, ChatGPT is intelligent!". The way AI has been reduced to a meaningless word has destroyed a lot of truly useful AI research
-
@Techaltar IMO the empirical evidence for option B can't be ignored. I don't think that the tech bros, except a few that are really lost to religious fervor, are too dumb to understand that it is useless; they just can't act on that knowledge, because that would pop the bubble. Trivial insight, I know.
Notably, the few that do speak up note that the central problem of hallucinations can't be fixed with the current approach. I have read statements to that effect out of both the Meta and Google camps.
-
@Techaltar Lack of nuance? On the internet??
-
@Techaltar@mas.to I think it has some applications in some areas, but none of those are the areas that techbros are using it in. People are using it like an oracle of knowledge, rather than using it for what it's good at.
-
@Techaltar@mas.to I also hate the oversimplification of all machine learning to """AI""" and therefore bad: machine learning has been used in a number of fields to great effect in the past, like noise compression and image recognition, and it's great for those purposes when you're using a purpose-built model.
People need to learn to differentiate the two, imo.
-
I think that the polarization is primarily because of the two camps being very vocal, while most users simply use it and get on with their day.
I've got a local Ollama installation running gpt-oss and mistral, and I generally use them as a souped-up search engine when I need to fetch and organise information from many sources - that saves me a lot of time, compared to doing the searches myself.
Speaking with other people in my professional orbit, that seems to be a common use, together with things like comparing business offers with what has been requested - did we remember everything they asked for, and did we format things in an easily read, informative way, for example.
I can easily see the problems with AI being used to generate art or work that mimics the work of others, but I acknowledge that it is going to be hard to set up a legal framework that can stop this. Training AI models on things that are on the internet is basically using the open nature of the internet to your advantage, and I really have no idea how to prevent that.
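For what it's worth, the kind of local workflow described above can also be scripted. A minimal sketch, assuming an Ollama server on its default port with the mistral model pulled; the prompt and the example snippets are purely illustrative.

# Toy sketch: ask a local Ollama model to organise snippets gathered from
# several sources. Assumes `ollama serve` is running on localhost:11434
# and that the "mistral" model has been pulled; adjust names as needed.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def summarise(snippets: list[str], question: str, model: str = "mistral") -> str:
    """Send the gathered snippets plus a question to the local model."""
    prompt = (
        "Organise the following notes and answer the question.\n\n"
        + "\n".join(f"- {s}" for s in snippets)
        + f"\n\nQuestion: {question}"
    )
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    notes = [
        "Vendor A quoted 12,000 EUR including support.",
        "Vendor B quoted 9,500 EUR but support is billed separately.",
    ]
    print(summarise(notes, "Which offer covers everything we asked for?"))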