I find it very frustrating that the only opinions people seem to be able to have about AI are:
A/ it's the next coming of Christ, just deploy it everywhere, and if it has any issues those will be fixed with the next update anyway
B/ it's completely useless, has never and will never do anything useful and it's all just a scam by tech bros who are too dumb to realize that it's useless
@Techaltar AI that is just LLMs is useless, because it's only useful if there is someone who checks whatever bs it makes.
Neural networks that adjust the white balance of photos, remove the background from a video or clean up an audio recording are advanced algorithms, not some digital sentient being that can be called AI, but they are the most useful.
-
@Techaltar it's obviously overhyped, but I like the fact that it will put paparazzi and models out of a job.
-
If it gives you any hope in humanity, I think AI will be incredibly useful for doing dangerous work humans shouldn't do, for detecting defects in manufacturing, and for helping patients in the medical field (not curing cancer, but detecting anomalies etcetera).
Oh and no, I do not want the AI bubble to pop, because I realise the pop will do more harm than good. Actually exclusively harm. No good. Nothing will get better, everything will get worse…
-
@Techaltar it's not that it's useless, it's that it is dangerous. And I don't mean sci-fi movie dangerous, but "let's centralize all computation and knowledge with a few fascist companies and have the interaction with them be a manipulative chat bot" dangerous. It scares me that people are relying on LLMs, and that LLMs are the ultimate manipulation tool owned by the least trustworthy people in the world. It scares me that they seem to destroy the alternatives to computation before they even have a product.
-
@Techaltar I believe a chat interface is a very inefficient way to interact with a computer; that's why all the long LLM text files are needed. But we are lulled into a false sense of usability because they are manipulative in the way they respond. There are probably other, more efficient ways to use LLMs, but they are not shiny or flashy and therefore don't convince investors or users. RAG is very neat and a powerful lookup technology, but everyone suddenly expects a conversation partner.
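To make the "RAG as a lookup technology" point concrete, here is a minimal sketch of retrieval used purely for lookup, with no conversation layer at all: index a few documents with TF-IDF and return the best-matching passages for a query. The documents, the query and the use of scikit-learn are illustrative assumptions, not anything from the thread; a real pipeline would swap in an embedding model and a proper document store.

```python
# Minimal "retrieval as lookup" sketch: rank local documents against a query
# with TF-IDF instead of asking a chat model to summarise them.
# The documents and query below are placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The white balance of a photo can be corrected with a small neural network.",
    "RAG pipelines retrieve relevant passages before any text is generated.",
    "Local models can be served on your own machine instead of a cloud API.",
]

query = "how does retrieval-augmented generation find relevant text?"

# Build a TF-IDF index over the documents and project the query into it.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity and print them, best match first.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The point of the sketch is that the output is just ranked source text the user can read and verify, rather than a generated answer presented by a conversational front end.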
-
@Techaltar I'm also vomit-in-mouth disgusted by the gullibility and manipulativeness whenever I see a headline like "AI finds tumors" and everyone is like "see, ChatGPT is intelligent!". The way AI has been reduced to a meaningless word has destroyed a lot of truly useful AI research.
-
@Techaltar IMO the empirical evidence for option B can't be ignored. I don't think that the tech bros, except a few who are really lost to religious fervor, are too dumb to understand that it is useless; they just can't act on that knowledge, because that would pop the bubble. Trivial insight, I know.
Notably, the few that do speak up note that the central problem of hallucinations can't be fixed with the current approach. I have read statements to that effect out of the Meta and Google camps.
-
@Techaltar Lack of nuance? On the internet??
-
@Techaltar@mas.to I think it has some applications in some areas, but none of them are the areas that techbros are using it in. People are using it like an oracle of knowledge, rather than using it for what it's good at.
-
@Techaltar@mas.to I also hate the oversimplification where all machine learning becomes """AI""" and is therefore bad. Machine learning has been used in a number of fields to great effect in the past, like noise reduction and image recognition, and it's great for those purposes when you're using a purpose-built model.
People need to learn to differentiate the two, imo.
-
I think the polarization is primarily because the two camps are very vocal, while most users simply use it and get on with their day.
I've got a local Ollama installation running gpt-oss and Mistral, and I generally use them as a souped-up search engine when I need to fetch and organise information from many sources - that saves me a lot of time compared to doing the searches myself (see the sketch after this comment).
Speaking with other people in my professional orbit, that seems to be a common use, together with things like comparing business offers against what was requested - did we remember everything they asked for, and did we format things in an easily read, informative way?
I can easily see the problems with AI being used to generate art or work that mimics the work of others, but I acknowledge that it is going to be hard to set up a legal framework that can stop this. Training AI models on things that are on the internet is basically using the open nature of the internet to your advantage, and I really have no idea how to prevent that.
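As a rough illustration of the local-Ollama workflow described in the comment above, here is a minimal sketch of the kind of "organise these sources" query one could send to Ollama's standard HTTP API on localhost. The offers, the question and the model choice are placeholder assumptions; it presumes a default Ollama install listening on port 11434 with the mistral model already pulled.

```python
# Sketch of querying a local Ollama instance to organise information that was
# gathered elsewhere. Assumes Ollama is running locally with `mistral` pulled
# (e.g. via `ollama pull mistral`); the sources and question are placeholders.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

sources = [
    "Offer A: delivery in 6 weeks, support included for 12 months.",
    "Offer B: delivery in 4 weeks, support billed separately.",
]

prompt = (
    "Summarise and compare the following offers against a requested "
    "12-month support period and 5-week delivery deadline:\n\n"
    + "\n".join(sources)
)

payload = json.dumps({
    "model": "mistral",   # could also be a locally served gpt-oss model
    "prompt": prompt,
    "stream": False,      # return one complete response instead of chunks
}).encode("utf-8")

request = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    answer = json.loads(response.read())

print(answer["response"])
```

Because everything runs against localhost, nothing in the pasted offers leaves the machine, which matches the appeal of the local setup described above.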