FARVEL BIG TECH

When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens to use to destroy their lives.

Uncategorised · 36 posts · 18 posters · 33 views

This thread has been deleted. Only users with topic management privileges can see it.

#1 randahl@mastodon.social wrote:

When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens to use to destroy their lives.

A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.

jeppe@uddannelse.social shared this topic.

#2 ilsaclark@mastodon.social wrote (in reply to #1):

      @randahl hello

#3 androcat@toot.cat wrote (in reply to #1):

        @randahl Don't be stupid.

        This is pro-AI hype.

        Even though it sounds scary, it's pro-AI hype, because LLMs are not actually intelligent, and never will be.

There's a reason AI-company-owning billionaires keep moaning about how AI is an existential threat: because they wish it were.

        But it ain't.

#4 randahl@mastodon.social wrote (in reply to #3):

@androcat read my post again. I did not say that AI was acting alone, without human interaction.

#5 androcat@toot.cat wrote (in reply to #4):

            @randahl They won't be able to predict shit. That's not how that works.

            If you want to predict events, you'd need a program that looks at events.

            LLMs predict text. They can't predict anything that isn't already in their text corpus, and the actual world is not in their text corpus, let alone things that haven't happened yet.

            Users may be fooled into thinking there is an intelligence or competence in there, but they are incorrect. Bamboozled.
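
A toy bigram model makes the point concrete: a pure text predictor can only continue with sequences it has already seen. A minimal sketch, with an invented ten-word corpus (nothing here comes from a real model):

    from collections import Counter, defaultdict

    # Tiny "training corpus" -- the model can only ever know what is in here.
    corpus = "the cat sat on the mat and the cat slept".split()

    # Count bigrams: how often each word follows each other word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(word):
        """Return the most likely next word, or None if the word was never seen."""
        if word not in bigrams:
            return None  # outside the corpus -> no prediction possible
        return bigrams[word].most_common(1)[0][0]

    print(predict_next("cat"))       # -> 'sat' (seen in the corpus)
    print(predict_next("tomorrow"))  # -> None (not in the corpus, nothing to predict)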

#6 madsenandersc@social.vivaldi.net wrote (in reply to #1; last edited by madsenandersc@social.vivaldi.net):

              @randahl

I usually think that you share some very thought-provoking things here, but this time I'm sorry to say that this is pure bullshit.

How will the LLM be able to predict which citizens are the most likely to be critics of AI? Even if a person shares thoughts on this with an AI tool, the question is not - regardless of what people think - used as training for the AI model.

              How does the LLM know that she is about to write a book on AI? Yes, she may ask questions about writing books, but again - the question is not used to train the model.

              How would the LLM know she had an affair? How would it know about her relationship with her sister-in-law? How would it know about her marriage and how the dynamics between her and her husband are?

You could make the argument that a person could be eavesdropping on whatever conversations she has with an LLM (and that is definitely possible), but the same thing can be said for a simple Google search or Slack or Messenger or whatnot.

The problem is using an interface that you have no control over to relay or share vital information, and that is completely unrelated to any kind of LLM, except that the LLM happened to be the interface the person chose to share the information through.

#7 andii@climatejustice.social wrote (in reply to #5):

                @androcat @randahl
                ... it's not just LLMs though. Those algorithms that enable FB to 'know' someone's pregnant can serve other purposes relating to what's happening in someone's life...

#8 follpvosten@karp.lol wrote (in reply to #4):

@randahl @androcat the problem is with the very first part of the post - "when AI models are complete". There is no "being complete". We are way past the high point; it's only downhill from here.

#9 androcat@toot.cat wrote (in reply to #7):

                    @Andii

Oh my GOD! That's not "algorithms", that's spying. They predict you are preggers by listening to what you say if you have a Meta-connected device (such as a phone with the FB app or WA).

                    That's not prediction, that's just semantic search and grotesque amounts of spying.

                    @randahl

#10 androcat@toot.cat wrote (in reply to #8):

                      @follpvosten @randahl

                      The slop levels reached the intake tubes.

#11 benfulton@mastodon.london wrote (in reply to #1):

@randahl Which, like most everything in AI, was predicted by Isaac Asimov, this time in the short story "The Evitable Conflict", where the machines carefully remove human obstacles to their plans for the good of humanity.

                        https://en.wikipedia.org/wiki/The_Evitable_Conflict

                        #bookstodon #ai #scifi

#12 johnsullivan@mastodonapp.uk wrote (in reply to #1):

@randahl Why direct your robots to build a human-killing robot, while also designing a time machine to send it back to murder people, when you can convince humans to do all the hard work of protecting AI from other humans?

                          Efficiency is key to evolution in technological self-preservation.

#13 artharg@mastodon.nl wrote (in reply to #1):

@randahl I think that example is a bit far-fetched. What is definitely going to be possible, with the surveillance tech that is now being built into social media and messaging apps, is digging for dirt on someone that you've already identified as a threat. And with control over all forms of media, that dirt can easily be weaponized. You need not nip all buds, only those that are starting to bloom. When you do, there is no need to be surreptitious and subtle. The takedown is a warning to others.

#14 elrohir@mastodon.gal wrote (in reply to #1):

@randahl that's just Roko's Basilisk, and that itself is a reinvention of Pascal's wager. There is no "when": these text-prediction systems may be able to produce long statements, but they are as distanced from free will as a pen. There is no solid reason to "prepare" against that, and calls for such a thing just help the tech oligarchs increase their influence in politics.

#15 canleaf@mastodon.social wrote (in reply to #6):

@madsenandersc @randahl LLMs can detect sentiment in text and read text from images. LLMs can also generate text at cold (low) temperatures and with a chosen sentiment.
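
For context on "cold temperature": sampling divides a model's raw scores by a temperature before converting them to probabilities, so a cold (low) temperature makes output nearly deterministic. A minimal sketch with invented scores, not any real model's output:

    import math, random

    def sample(logits, temperature):
        """Pick a token from {token: raw score} via temperature-scaled softmax."""
        toks = list(logits)
        weights = [math.exp(logits[t] / temperature) for t in toks]
        return random.choices(toks, weights=weights)[0]

    logits = {"calm": 2.0, "angry": 1.0, "neutral": 0.5}  # invented scores
    print([sample(logits, 0.1) for _ in range(5)])  # cold: essentially always 'calm'
    print([sample(logits, 5.0) for _ in range(5)])  # hot: noticeably more varied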

#16 benh@mastodon.scot wrote (in reply to #1):

@randahl

Nah, their accuracy will be so low that they will have no credibility.

#17 riggbeck@mastodon.social wrote (in reply to #11):

                                    @benfulton @randahl

                                    I remembered the first part of the story - supposedly infallible machines making mistakes - but had forgotten the ending and who wrote it. Chilling.

#18 randahl@mastodon.social wrote (in reply to #6):

@madsenandersc do we both agree that the conversation you and I are building right now can be used to assess which one of us is more critical towards AI? And do we also agree that this conversation is public and can be fed into any AI system and used to rank you and me with regards to our AI scepticism?

#19 randahl@mastodon.social wrote (in reply to #13):

                                        @ArtHarg imagine all of your public posts from your entire life being used to give you an AI enemy score. Once we have the AI enemy score of every individual, we can then start digging for dirt on the top 100 AI enemies.

                                        This is most certainly not the future I was hoping for, but it is where we are headed.
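
Mechanically, such a ranking needs nothing exotic; even a crude keyword tally over public posts produces a score to sort people by. A deliberately naive sketch, with invented posts and an invented keyword list (a real system would use a trained classifier):

    # Naive "AI-scepticism score": count sceptical keywords in each user's
    # public posts, then rank users by the total. Illustrative only.
    SCEPTIC_TERMS = {"surveillance", "hype", "bullshit", "spying", "critic"}

    posts = {  # invented sample data
        "alice": ["AI mass surveillance must be stopped", "pure hype and spying"],
        "bob": ["LLMs can detect sentiment in text"],
    }

    def scepticism_score(texts):
        words = [w.strip(".,!?").lower() for t in texts for w in t.split()]
        return sum(w in SCEPTIC_TERMS for w in words)

    ranking = sorted(posts, key=lambda u: scepticism_score(posts[u]), reverse=True)
    print(ranking)  # users ordered by score, most sceptical first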

#20 andii@climatejustice.social wrote (in reply to #9):

@androcat @randahl
I gathered it was machine learning on changes in some behaviour patterns, not spying by listening in. We'll need to go to evidential mode; I only have a recollection of reports from about five years ago.
