When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.
-
@randahl hello
-
@randahl Don't be stupid.
This is pro-AI hype.
Even though it sounds scary, it's pro-AI hype, because LLMs are not actually intelligent, and never will be.
There's a reason AI-company-owning billionaires keep moaning about how AI is an existential threat. Because they wish it were.
But it ain't.
-
@androcat read my post again. I did not say that AI was acting alone without human interaction.
-
@randahl They won't be able to predict shit. That's not how that works.
If you want to predict events, you'd need a program that looks at events.
LLMs predict text. They can't predict anything that isn't already in their text corpus, and the actual world is not in their text corpus, let alone things that haven't happened yet.
Users may be fooled into thinking there is an intelligence or competence in there, but they are incorrect. Bamboozled.
-
I usually think that you share some very thought-provoking things here, but this time I'm sorry to say that this is pure bullshit.
How will the LLM be able to predict which citizens are the most likely to be critics of AI? Even if a person shares thoughts on this with an AI tool, the question is not, regardless of what people think, used as training data for the AI model.
How does the LLM know that she is about to write a book on AI? Yes, she may ask questions about writing books, but again, the question is not used to train the model.
How would the LLM know she had an affair? How would it know about her relationship with her sister-in-law? How would it know about her marriage and the dynamics between her and her husband?
You could make the argument that a person could be eavesdropping on whatever conversations she has with an LLM (and that is definitely possible), but the same thing can be said for a simple Google search or Slack or Messenger or whatnot.
The problem is using an interface that you have no control over to relay or share vital information, and that is completely unrelated to any kind of LLM, except that it happened to be the interface through which the person chose to share the information.
-
The slop levels reached the intake tubes.
-
@randahl Which, like almost everything in AI, was predicted by Isaac Asimov, this time in the short story "The Evitable Conflict", where the machines carefully remove human obstacles to their plans for the good of humanity.
-
@randahl Why direct your robots to build a human-killing robot, and design a time machine to send it back to murder people, when you can convince humans to do all the hard work of protecting AI from other humans?
Efficiency is key to evolution in technological self-preservation.
-
@randahl I think that example is a bit far-fetched. What is definitely going to be possible, with the surveillance tech that is now being built into social media and messaging apps, is digging for dirt on someone you've already identified as a threat. And with control over all forms of media, that dirt can easily be weaponized. You need not nip all buds, only those that are starting to bloom. When you do, there is no need to be surreptitious and subtle. The takedown is a warning to others.
-
@randahl that's just Roko's Basilisk, which is itself a reinvention of Pascal's wager. There is no "when": these text-prediction systems may be able to produce long statements, but they are as distant from free will as a pen. There is no solid reason to "prepare" against that, and calls for such things just help the tech oligarchs increase their influence in politics.
-
@madsenandersc @randahl LLMs can detect sentiment in text and read text from images. LLMs can generate text at low sampling temperatures and with a chosen sentiment.
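For readers unfamiliar with the term: "temperature" here is the sampling parameter that controls how deterministic a model's output is. A minimal illustration of the mechanism, with no real model involved (the logit values are made up):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities; low temperature sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.1)  # near-deterministic: top option dominates
warm = softmax_with_temperature(logits, 1.0)  # more varied output
```

At temperature 0.1 the highest-scoring option takes nearly all the probability mass, which is why low ("cold") temperature output reads as consistent and controlled.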
-
Nah, their accuracy will be so low that they'll have no credibility.
-
I remembered the first part of the story - supposedly infallible machines making mistakes - but had forgotten the ending and who wrote it. Chilling.
-
@madsenandersc do we both agree that the conversation you and I are building right now can be used to assess which one of us is more critical of AI? And do we also agree that this conversation is public and can be fed into any AI system and used to rank you and me with regard to our AI scepticism?
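Mechanically, the ranking described above needs nothing more sophisticated than text scoring over public posts. A deliberately crude sketch (the word lists and posts are invented for illustration; a real system would use a trained classifier, but the principle is the same):

```python
# Naive lexicon-based scorer: counts skeptical vs. enthusiastic terms per post.
SKEPTIC_TERMS = {"surveillance", "hype", "bullshit", "threat", "critic"}
BOOSTER_TERMS = {"amazing", "revolutionary", "productivity"}

def skepticism_score(text: str) -> int:
    """Higher score = text reads as more AI-skeptical."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & SKEPTIC_TERMS) - len(words & BOOSTER_TERMS)

posts = {
    "alice": "AI mass surveillance is pure hype and a threat.",
    "bob": "AI is amazing, a revolutionary productivity tool.",
}
# Rank users from most to least skeptical based on their public text.
ranked = sorted(posts, key=lambda u: skepticism_score(posts[u]), reverse=True)
```

The point is not that this toy scorer is accurate, but that public conversations are trivially machine-readable input for exactly this kind of classification.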
-
@ArtHarg imagine all of your public posts from your entire life being used to give you an AI enemy score. Once we have the AI enemy score of every individual, we can then start digging for dirt on the top 100 AI enemies.
This is most certainly not the future I was hoping for, but it is where we are headed.
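The "top 100" step described above is, technically, just an aggregate-and-select operation. A minimal sketch with made-up per-user scores (the usernames and numbers are hypothetical):

```python
import heapq

# Hypothetical per-user scores aggregated from a lifetime of public posts.
enemy_scores = {"user_a": 12.5, "user_b": 3.1, "user_c": 9.8, "user_d": 7.0}

TOP_N = 2  # stand-in for the "top 100" in the post above
# heapq.nlargest picks the N highest-scoring users without sorting everything.
targets = heapq.nlargest(TOP_N, enemy_scores, key=enemy_scores.get)
```

Selecting the highest-ranked individuals from millions of scored profiles is a one-liner; the hard (and dangerous) part is only the data collection feeding it.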
-