When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
-
@androcat read my post again. I did not say that AI was acting alone without human interaction.
@randahl They won't be able to predict shit. That's not how that works.
If you want to predict events, you'd need a program that looks at events.
LLMs predict text. They can't predict anything that isn't already in their text corpus, and the actual world is not in their text corpus, let alone things that haven't happened yet.
Users may be fooled into thinking there is an intelligence or competence in there, but they are incorrect. Bamboozled.
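To make that concrete, here is a minimal sketch of what "predicting text" means mechanically, using the open gpt2 model as an illustrative stand-in (the model choice and prompt are assumptions for demonstration, not anyone's production system). The only thing a causal LLM computes is a probability distribution over the next token:

```python
# Minimal sketch: a causal LLM only scores the next token given prior tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# Distribution over the single next token -- this is the whole "prediction"
# the model makes; no events, no world model, just token scores.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, tid in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(tid)])!r}: {float(p):.3f}")
```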
-
When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.
I usually think that you share some very thought-provoking things here, but this time I'm sorry to say that this is pure bullshit.
How will the LLM be able to predict which citizens are the most likely to be critics of AI? Even if a person shares thoughts on this with an AI tool, the question is not - regardless of what people think - used as training for the AI model.
How does the LLM know that she is about to write a book on AI? Yes, she may ask questions about writing books, but again - the question is not used to train the model.
How would the LLM know she had an affair? How would it know about her relationship with her sister-in-law? How would it know about her marriage and what the dynamics between her and her husband are?
You could make the argument that a person could be eavesdropping on whatever conversations she has with an LLM (and that is definitely possible), but the same thing can be said for a simple Google search or Slack or Messenger or whatnot.
The problem is using an interface that you have no control over to relay or share vital information, and that is completely unrelated to any kind of LLM, except that it happened to be where the person chose to share the information.
-
@randahl They won't be able to predict shit. That's not how that works.
If you want to predict events, you'd need a program that looks at events.
LLMs predict text. They can't predict anything that isn't already in their text corpus, and the actual world is not in their text corpus, let alone things that haven't happened yet.
Users may be fooled into thinking there is an intelligence or competence in there, but they are incorrect. Bamboozled.
-
@androcat read my post again. I did not say that AI was acting alone without human interaction.
-
The slop levels reached the intake tubes.
-
When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.
@randahl Which, like most everything AI, was predicted by Isaac Asimov. This time in the short story "The Evitable Conflict", where the machines carefully remove human obstacles to their plans for the good of humanity.
-
When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.
@randahl Why direct your robots to build a human-killing robot, while also designing a time machine to send it back to murder people, when you can convince humans to do all the hard work of protecting AI from other humans?
Efficiency is key to evolution in technological self-preservation.
-
When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.
@randahl I think that example is a bit farfetched. What is definitely going to be possible, with the surveillance tech that is now being built into social media and messaging apps, is digging for dirt on someone that you’ve already identified as a threat. And with control over all forms of media, that dirt can easily be weaponized. You need not nip all buds, only those that are starting to bloom. When you do, there is no need to be surreptitious and subtle. The takedown is a warning to others.
-
When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.
@randahl that's just Roko's Basilisk argument, and that itself is a reinvention of Pascal's wager. There is no "when": these text-prediction systems may be able to produce long statements, but they are as distanced from free will as a pen. There is no solid reason to "prepare" against that, and calls for such a thing are just helping the tech oligarchs increase their influence in politics.
-
I usually think that you share some very thought-provoking things here, but this time I'm sorry to say that this is pure bullshit.
How will the LLM be able to predict which citizens are the most likely to be critics of AI? Even if a person shares thoughts on this with an AI tool, the question is not - regardless of what people think - used as training for the AI model.
How does the LLM know that she is about to write a book on AI? Yes, she may ask questions about writing books, but again - the question is not used to train the model.
How would the LLM know she had an affair? How would it know about her relationship with her sister-in-law? How would it know about her marriage and what the dynamics between her and her husband are?
You could make the argument that a person could be eavesdropping on whatever conversations she has with an LLM (and that is definitely possible), but the same thing can be said for a simple Google search or Slack or Messenger or whatnot.
The problem is using an interface that you have no control over to relay or share vital information, and that is completely unrelated to any kind of LLM, except that it happened to be where the person chose to share the information.
@madsenandersc @randahl LLMs can detect sentiment in texts and read text from images. LLMs can generate texts at low ("cold") temperatures and with a chosen sentiment.
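Both capabilities are cheap to reproduce with off-the-shelf tools. A rough sketch using Hugging Face pipelines (the models, prompts, and example output here are illustrative defaults, not any specific deployment):

```python
from transformers import pipeline

# 1) Sentiment detection over text (pipeline's default sentiment model).
sentiment = pipeline("sentiment-analysis")
print(sentiment("I will never trust these AI systems with anything."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]

# 2) Generation at a low ("cold") temperature: sampling hugs the model's
# most probable continuations, producing flat, dispassionate text.
generator = pipeline("text-generation", model="gpt2")
print(generator("The committee has concluded that",
                max_new_tokens=20, do_sample=True, temperature=0.2))
```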
-
When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.
Nah, their accuracy will be so low that they'll have no credibility.
-
@randahl Which, like most everything AI, was predicted by Isaac Asimov. This time in the short story "The Evitable Conflict", where the machines carefully remove human obstacles to their plans for the good of humanity.
I remembered the first part of the story - supposedly infallible machines making mistakes - but had forgotten the ending and who wrote it. Chilling.
-
I usually think that you share some very thought-provoking things here, but this time I'm sorry to say that this is pure bullshit.
How will the LLM be able to predict which citizens are the most likely to be critics of AI? Even if a person shares thoughts on this with an AI tool, the question is not - regardless of what people think - used as training for the AI model.
How does the LLM know that she is about to write a book on AI? Yes, she may ask questions about writing books, but again - the question is not used to train the model.
How would the LLM know she had an affair? How would it know about her relationship with her sister-in-law? How would it know about her marriage and what the dynamics between her and her husband are?
You could make the argument that a person could be eavesdropping on whatever conversations she has with an LLM (and that is definitely possible), but the same thing can be said for a simple Google search or Slack or Messenger or whatnot.
The problem is using an interface that you have no control over to relay or share vital information, and that is completely unrelated to any kind of LLM, except that it happened to be where the person chose to share the information.
@madsenandersc do we both agree that the conversation you and I are building right now can be used to assess which one of us is more critical towards AI? And do we also agree that this conversation is public and can be fed into any AI system and used to rank you and me with regards to our AI scepticism?
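For the sake of argument, here is how little machinery that assessment would need: a zero-shot classifier over a public post (the labels and example post are mine, purely illustrative; a real actor could use any model):

```python
# Sketch: scoring a public post for AI scepticism with a zero-shot
# classifier (the pipeline's default NLI model). Labels are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")

post = ("LLMs predict text. They can't predict anything that isn't "
        "already in their text corpus.")
result = classifier(post, candidate_labels=["critical of AI", "supportive of AI"])
print(dict(zip(result["labels"], result["scores"])))
# e.g. {'critical of AI': 0.9..., 'supportive of AI': 0.0...}
```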
-
@randahl I think that example is a bit farfetched. What is definitely going to be possible, with the surveillance tech that is now being built into social media and messaging apps, is digging for dirt on someone that you’ve already identified as a threat. And with control over all forms of media, that dirt can easily be weaponized. You need not nip all buds, only those that are starting to bloom. When you do, there is no need to be surreptitious and subtle. The takedown is a warning to others.
@ArtHarg imagine all of your public posts from your entire life being used to give you an AI enemy score. Once we have the AI enemy score of every individual, we can then start digging for dirt on the top 100 AI enemies.
This is most certainly not the future I was hoping for, but it is where we are headed.
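A sketch of the ranking step being described, with score_post as a deliberately toy, hypothetical stand-in (a real system would plug in a trained classifier like the zero-shot example above):

```python
# Sketch: aggregate a per-user "AI enemy score" over public post history
# and rank the top N. score_post is a hypothetical, toy stand-in.
from statistics import mean

def score_post(text: str) -> float:
    # Toy scorer: fraction of flagged words present. Purely illustrative;
    # a real pipeline would use a trained sentiment/stance classifier.
    flagged = {"surveillance", "critic", "ban", "dystopia", "oppose"}
    return len(set(text.lower().split()) & flagged) / len(flagged)

def enemy_scores(posts_by_user: dict[str, list[str]], top_n: int = 100):
    scores = {user: mean(score_post(p) for p in posts)
              for user, posts in posts_by_user.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

posts = {
    "alice": ["we must ban mass surveillance", "this is a dystopia"],
    "bob": ["lovely weather today"],
}
print(enemy_scores(posts))  # alice outranks bob
```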
-
When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.
A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.
@randahl I'd look at all that in a different way:
1) models will never be complete; they'll always need to scrape the internet to pick up on different forms of dissent
2) rather than worry about the models (and whether they have predictive power or not), focus on "a group of people will (pretend to) use AI to (pretend to) predict anti-AI people"
3) sure, those people have an axe to grind and they'll use any excuse to attack their perceived enemies

On the 2nd paragraph:
1) most people who start out to write an anti-AI book will fail; no need to build an AI to prove that
2) the percentages don't matter; if they want to attack you, they'll find some other pretext
3) I think that people power is much more important than getting elected to parliament if you want to effect change, so hobbling her in this way reads more like a Hollywood movie script than a realistic future event

In summary, it doesn't matter if they use AI (even if it turns out to be good/useful). The important thing is that there are certain groups out there who are anti-freedom, anti-privacy, anti-anything that doesn't fit their narrow, bigoted worldview, and they'll use whatever tools are available to enforce their views on the world.
-
@ghouston @androcat @randahl
At the time it was simply noticing that the adverts they were served reflected their 'new' state even though they hadn't said anything to anyone. The report at the time, from memory, said the pattern recognition was picking up correlations that human researchers hadn't thought about. But I'll need to see if I can refind the reports, later when work isn't shouting at me.
-
@madsenandersc do we both agree that the conversation you and I are building right now can be used to assess which one of us is more critical towards AI? And do we also agree that this conversation is public and can be fed into any AI system and used to rank you and me with regards to our AI scepticism?
No, I don't agree that my stance on LLMs is easily identifiable from our conversation.
Let's make a test: Describe how you think I feel about AI and LLMs in a paragraph, and then you have my word that I will truthfully describe how I use (or not) LLMs in my everyday life and where I see the dangers in it.
And just to be clear: While being critical about a technology may be visible through public postings, the rest of your argument (having an affair, relationship with spouse and sister-in-law etc.) is not - and if it were, there would be no reason for someone to rely on any kind of AI to use it for blackmail.