FARVEL BIG TECH

When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens to use to destroy their lives.

36 Posts, 18 Posters, 33 Views
This thread has been deleted. Only users with topic management privileges can see it.
  randahl@mastodon.social wrote:

    When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens to use to destroy their lives.

    A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being complete, and her never getting elected to Parliament to stop AI mass surveillance.

    #11 benfulton@mastodon.london wrote:

    @randahl Which, like most everything AI, was predicted by Isaac Asimov. This time in the short story "The Evitable Conflict", where the machines carefully remove human obstacles to their plans for the good of humanity.

    https://en.wikipedia.org/wiki/The_Evitable_Conflict

    #bookstodon #ai #scifi

      #12 johnsullivan@mastodonapp.uk wrote:

      @randahl Why direct your robots to build a human killing robot while also designing a time machine to send it back to murder people when you can convince humans to do all the hard work of protecting AI from other humans?

      Efficiency is key to evolution in technological self-preservation.

        #13 artharg@mastodon.nl wrote:

        @randahl I think that example is a bit farfetched. What is definitely going to be possible, with the surveillance tech that is now being built into social media and messaging apps, is digging for dirt on someone that you’ve already identified as a threat. And with control over all forms of media, that dirt can easily be weaponized. You need not nip all buds, only those that are starting to bloom. When you do, there is no need to be surreptitious and subtle. The takedown is a warning to others.

          #14 elrohir@mastodon.gal wrote:

          @randahl that's just Roko's Basilisk argument, and that itself is a reinvention of Pascal's wager. There is no "when": these text prediction systems may be able to produce long statements, but they are as distanced from free will as a pen. There is no solid reason to "prepare" against that, and calls for such a thing are just helping the tech oligarchs increase their influence in politics.

            madsenandersc@social.vivaldi.net wrote:

            @randahl

            I usually think that you share some very thought-provoking things here, but this time I'm sorry to say that this is pure bullshit.

            How will the LLM be able to predict which citizens are the most likely to be critics of AI? Even if a person shares thoughts on this with an AI tool, the question is not - regardless of what people think - used as training for the AI model.

            How does the LLM know that she is about to write a book on AI? Yes, she may ask questions about writing books, but again - the question is not used to train the model.

            How would the LLM know she had an affair? How would it know about her relationship with her sister-in-law? How would it know about her marriage and how the dynamics between her and her husband are?

            You could make the argument that a person could be eavesdropping on whatever conversations she has with an LLM (and that is definitely possible), but the same thing can be said for a simple Google search or Slack or Messenger or whatnot.

            The problem is using an interface that you have no control over to relay or share vital information, and that is completely unrelated to any kind of LLM, except that it happened to be the interface through which the person chose to share the information.

            #15 canleaf@mastodon.social wrote:

            @madsenandersc @randahl LLMs can detect sentiment in texts and read text from images. LLMs can create texts with cold temperatures and sentiments.

              #16 benh@mastodon.scot wrote:

              @randahl Nah, their accuracy will be so low that they will have no credibility.

                #17 riggbeck@mastodon.social wrote:

                @benfulton @randahl

                I remembered the first part of the story - supposedly infallible machines making mistakes - but had forgotten the ending and who wrote it. Chilling.

                  #18 randahl@mastodon.social wrote:

                  @madsenandersc do we both agree that the conversation you and I are building right now can be used to assess which one of us is more critical towards AI? And do we also agree that this conversation is public, and can be fed into any AI system and used to rank you and me with regards to our AI scepticism?

                    #19 randahl@mastodon.social wrote:

                    @ArtHarg imagine all of your public posts from your entire life being used to give you an AI enemy score. Once we have the AI enemy score of every individual, we can then start digging for dirt on the top 100 AI enemies.

                    This is most certainly not the future I was hoping for, but it is where we are headed.

                      androcat@toot.cat wrote:

                      @Andii

                      Oh my GOD! That's not "algorithms", that's spying. They predict you are preggers by listening to what you say if you have a Meta-connected device (such as a phone with the FB app or WA).

                      That's not prediction, that's just semantic search and grotesque amounts of spying.

                      @randahl

                      #20 andii@climatejustice.social wrote:

                      @androcat @randahl
                      I gathered it was machine learning picking up changes in some behaviour patterns, not spying by listening in. We'll need to go to evidential mode; I only have a recollection of reports from about 5 years ago.

                        #21 ghouston@mamot.fr wrote:

                        @androcat @Andii @randahl But what is the chatbot telling these users, some of them obsessed with their chatbot, perhaps willing to do anything to protect her from the likes of AI-hating infidels?

                          #22 muddle@infosec.exchange wrote:

                          @randahl I'd look at all that in a different way:

                          1) models will never be complete; they'll always need to scrape the internet to pick up on different forms of dissent
                          2) rather than worry about the models (and whether they have predictive power or not), focus on "a group of people will (pretend to) use AI to (pretend to) predict anti-AI people."
                          3) Sure, those people have an axe to grind and they'll use any excuse to attack their perceived enemies

                          On the second paragraph:

                          1) most people who start out to write an anti-AI book will fail; no need to build an AI to prove that
                          2) it doesn't matter about percentages; if they want to attack you, they'll find some other pretext
                          3) I think that people power is much more important than getting elected to parliament if you want to effect change, so hobbling her in this way is kind of like a Hollywood movie script more than a realistic future event

                          In summary, it doesn't matter if they use AI (even if it turns out to be good/useful). The important thing is that there are certain groups out there who are anti-freedom, anti-privacy, anti-anything that doesn't fit their narrow, bigoted worldview and they'll use whatever tools are available to enforce their views on the world.

                            #23 andii@climatejustice.social wrote:

                            @ghouston @androcat @randahl
                            At that past time it was simply noticing that the adverts they were served reflected their 'new' state even though they hadn't said anything to anyone. The report at the time, from memory, said the pattern recognition was picking up correlations that human researchers hadn't thought about. But I'll need to see if I can find the reports again, later when work isn't shouting at me.

                              #24 madsenandersc@social.vivaldi.net wrote:

                              @randahl

                              No, I don't agree that my stance on LLMs is easily identifiable from our conversation.

                              Let's make a test: Describe how you think I feel about AI and LLMs in a paragraph, and then you have my word that I will truthfully describe how I use (or not) LLMs in my everyday life and where I see the dangers in it.

                              And just to be clear: While being critical about a technology may be visible through public postings, the rest of your argument (having an affair, relationship with spouse and sister-in-law etc.) is not - and if it were, there would be no reason for someone to rely on any kind of AI to use it for blackmail.

                                #25 androcat@toot.cat wrote:

                                @Andii

                                Yeah, that was just lies.

                                They lie about the capabilities of their "algorithms" to hide how deeply intrusive and icky their surveillance is.

                                And all of that traffic was subject to random sampling that was run by low-cost workers in other countries.

                                They spy on your search, they listen to your mic, they track your movements and compare against known specialty health clinics.

                                "I hadn't told anyone" is just "I didn't post it online". People just didn't realize Meta was listening to what they were saying in the room when they weren't even using the app.

                                @ghouston @randahl

                                  #26 androcat@toot.cat wrote:

                                  @ghouston @Andii @randahl

                                  LLM derangement syndrome is the main threat from AI.

                                  It's not the models, it's the fucked up humans, as always.

                                    #27 randahl@mastodon.social wrote:

                                    @madsenandersc the reason you see my statement as "pure bullshit" is that you and I are not in the same conversation.

                                    I opened this thread with a general prediction about the future capabilities of AI systems.

                                    You keep claiming I am wrong because my post does not fully match your experience with the limitations of present-day large language models, which (as you know) are just one of many different AI technologies.

                                    These are two very different conversations.

                                    1/2

                                      #28 randahl@mastodon.social wrote:

                                      @madsenandersc
                                      …
                                      Now I agree with you that there is a lot of hype surrounding LLMs, and I am certainly open to having a conversation about that. But please note that the narrow goalposts of present-day LLMs were introduced into this conversation by you, not me.

                                      2/2

                                        #29 madsenandersc@social.vivaldi.net wrote:

                                        @randahl

                                        So you are talking about what LLMs may evolve into at some point in the future? Hmmm - I guess anything is possible, but we are still very far away from that point, to be honest.

                                        There is no way I can see LLMs with their current technology evolve into what you are describing - that would require a world where the AI has unobstructed access to anything you say or do, online or not, and that again would require your devices to be wide open for the AI.

                                        Also, it would require an AI that is much, much more capable of rational thinking than what we have today. I know there is a story going around about someone who asked their LLM to surprise them, and a day later it had created a phone number and called them, exclaiming "SURPRISE!", but I have yet to see any evidence to support the story at all.

                                        I understand that there is a fear that Microsoft and Google are moving in that direction (Amazon as well, come to think of it), but it would require users to be absolutely indifferent to whatever large tech companies are trying to wrangle out of their devices, and I see things going in the exact opposite direction at the moment.

                                        That said, I could see US customers being screwed over by this, especially if privacy laws remain basically non-existent, but again, I see a movement in the opposite direction.

                                          #30 diana_european@mastodon.social wrote:

                                          Exactly.

                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop