FARVEL BIG TECH
When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.

36 Posts 18 Posters 33 Views
This thread has been deleted. Only users with topic management privileges can see it.
  • andii@climatejustice.social wrote:

    @ghouston @androcat @randahl
    At the time, it was simply that people noticed the adverts they were served reflected their 'new' state even though they hadn't said anything to anyone. The report at the time, from memory, said the pattern recognition was picking up correlations that human researchers hadn't thought about. But I'll need to see if I can find the reports again. Later, when work isn't shouting at me.

    #25 · androcat@toot.cat wrote (last edited):

    @Andii

    Yeah, that was just lies.

    They lie about the capabilities of their "algorithms" to hide how deeply intrusive and icky their surveillance is.

    And all of that traffic was subject to random sampling that was run by low-cost workers in other countries.

    They spy on your search, they listen to your mic, they track your movements and compare against known specialty health clinics.

    "I hadn't told anyone" is just "I didn't post it online". People just didn't realize Meta was listening to what they were saying in the room when they weren't even using the app.

    @ghouston @randahl

    • ghouston@mamot.fr wrote:

      @androcat @Andii @randahl But what is the chatbot telling these users, some of them obsessed with their chatbot, perhaps willing to do anything to protect her from the likes of AI-hating infidels?

      #26 · androcat@toot.cat wrote (last edited):

      @ghouston @Andii @randahl

      LLM derangement syndrome is the main threat from AI.

      It's not the models, it's the fucked up humans, as always.

      • madsenandersc@social.vivaldi.net wrote:

        @randahl

        No, I don't agree that my stance on LLMs is easily identifiable from our conversation.

        Let's make a test: Describe how you think I feel about AI and LLMs in a paragraph, and then you have my word that I will truthfully describe how I use (or not) LLMs in my everyday life and where I see the dangers in it.

        And just to be clear: While being critical about a technology may be visible through public postings, the rest of your argument (having an affair, relationship with spouse and sister-in-law etc.) is not - and if it were, there would be no reason for someone to rely on any kind of AI to use it for blackmail.

        #27 · randahl@mastodon.social wrote (last edited):

        @madsenandersc the reason you see my statement as “pure bullshit” is that you and I are not in the same conversation.

        I opened this thread with a general prediction about the future capabilities of AI systems.

        You keep claiming I am wrong because my post does not fully match your experience with the limitations of present-day large language models, which (as you know) are just one of many different AI technologies.

        These are two very different conversations.

        1/2

        #28 · randahl@mastodon.social wrote (last edited):

          @madsenandersc
          …
          Now I agree with you that there is a lot of hype surrounding LLMs, and I am certainly open to having a conversation about that. But please be aware that the narrow goal posts of present-day LLMs were introduced into this conversation by you, not me.

          2/2

          #29 · madsenandersc@social.vivaldi.net wrote (last edited):

            @randahl

            So you are talking about what LLMs may evolve into at some point in the future? Hmmm - I guess anything is possible, but we are still very far away from that point, to be honest.

            There is no way I can see LLMs with their current technology evolve into what you are describing - that would require a world where the AI has unobstructed access to anything you say or do, online or not, and that again would require your devices to be wide open for the AI.

            Also, it would require an AI that is much, much more capable of rational thinking than what we have today. I know there is a story going around about someone who asked their LLM to surprise them, and a day later it had created a phone number and called them, exclaiming "SURPRISE!", but I have yet to see any evidence to support the story at all.

            I understand that there is a fear that Microsoft and Google are moving in that direction (Amazon as well, come to think of it), but it would require users to be absolutely indifferent to whatever large tech companies are trying to wrangle out of their devices, and I see things going in the exact opposite direction at the moment.

            That said, I could see US customers being screwed over by this, especially if privacy laws remain basically non-existent, but again - I see a movement in the opposite direction.

            • randahl@mastodon.social wrote (original post):

              When the AI models are complete, they will be able to predict which citizens are most likely to become key critics of AI, and which information about those citizens can be used to destroy their lives.

              A woman is about to write a book on AI, but she also had an affair three years ago, and revealing that information to her sister-in-law has a 97 percent probability of destroying her marriage, the book never being completed, and her never getting elected to Parliament to stop AI mass surveillance.

              #30 · diana_european@mastodon.social wrote (last edited):

              Exactly.

              • benfulton@mastodon.london wrote:

                @randahl Which, like most everything AI, was predicted by Isaac Asimov, this time in the short story "The Evitable Conflict", where the machines carefully remove human obstacles to their plans for the good of humanity.

                https://en.wikipedia.org/wiki/The_Evitable_Conflict

                #bookstodon #ai #scifi

                #31 · jimthewhyguy@techfieldday.net wrote (last edited):

                @benfulton @randahl I'm actually using that short story in my upcoming keynote address in a few weeks. It's a Susan Calvin gem!

                #32 · jimthewhyguy@techfieldday.net wrote (last edited):

                  @randahl As I understand it, China has been using social scores for at least a decade now to punish its citizens whenever they show resistance to the party line on what passes for social media there - e.g. restricting them from travel by limiting access to payment kiosks or other services.

                  LLMs may not be the direct tool governments would use, but there are plenty of surveillance techniques that would work perfectly well.

                  #33 · violetmadder@kolektiva.social wrote (last edited):

                    @madsenandersc @randahl

                    "AI" is ALREADY being used to draw conclusions about people's personalities and behavior, and to target them for MUCH WORSE things than advertising.

                    It doesn't even MATTER whether or not the shit is actually any good at what it's doing-- our oppressors love the idea of a mechanized magic 8 ball that tells them who's an enemy of the state and they're going to use it, accuracy be damned.

                    https://www.democracynow.org/2024/4/5/israel_ai

                  #34 · madsenandersc@social.vivaldi.net wrote (last edited):

                      @violetmadder @randahl

                      "A second AI system known as “Where’s Daddy?” tracked Palestinians on the kill list and was purposely designed to help Israel target individuals when they were at home at night with their families."

                    I'm not saying that this did not happen - I would just like to know HOW they were tracked.

                      AI is not some magic 8 ball that will tell you everything you want to know - you need some kind of technology that will give you the basic information, and THEN an AI system can do some calculations for you.

                      Were these individuals tricked into installing an app on their phones? Did Google and Apple provide the data? Amazon and their Alexa? - or did they simply conclude that most people spend their night in their home, and most likely in their bed?

                    How did the system determine who was a potential target? Did someone eavesdrop on private messages? Real-time decryption of secure chats? Public postings on social media?

                    Regardless of how you look at this, the problem is not that some kind of AI (which simply isn't that intelligent - it can just recalculate things very quickly) was used. The problem is the collection of all the personal information that is then fed into the AI.

                      Prevent illegal information collection - then the AI becomes much, much less useful.

                    #35 · violetmadder@kolektiva.social wrote (last edited):

                        @madsenandersc @randahl

                        Cellphones, security cameras, satellites, drones, social media, transaction records, yes-- you name it, all of it. Shit, if they REALLY feel like it, damn near any "smart" device can be used to snoop-- even if it doesn't have a mic per se, speakers can be used to listen too. Cellphones don't need to have particular apps installed to be hijacked by stingrays etc. Backdoors are everywhere, big tech has been treating privacy as a huge joke for years. Telegram is not secure, and even Signal probably isn't either.

                        That's a big part of what the massive data centers are for-- digesting the staggeringly enormous swaths of data that's being harvested from every possible source. In the US our social security data, probably ALL of it, got nabbed by DOGE. Hacks and breaches are exposing everyone's medical records and DNA tests and pretty much everything else.

                        And now they want us forking over even more personal information in the name of age verification-- they want our biometrics right down to fingerprints and facial scans and iris scans, everything.

                        What is or isn't "illegal" doesn't even matter anymore, fascists make that shit up as they go along. Ring and Flock cameras just hand their footage over to the feds freely. Google Nest or Home or whatever it's called claims that they don't store the footage unless you pay for a subscription-- but if a crime is recorded, suddenly, poof, turns out they DID have it all the entire time. Oh, and now hardware prices are going bonkers such that it may soon be very difficult to get a home PC that does its own computing, as we get herded towards using mere terminals that do everything on their cloud.

                        Goebbels would swoon.

                        And we've got gestapo that's scarcely bothering to be secret, grabbing random people off the street and disappearing them who knows where on flights that don't even turn on their transponders. How many people might get blackbagged just for saying the kind of shit I'm saying right now? How would we even tell?? We can hope that there are too many of us for them to squelch us, but for how long?

                        We're in very deep shit, and we can't just be sitting around waiting for courts and voting to magically fix it.

                    #36 · madsenandersc@social.vivaldi.net wrote (last edited):

                          @violetmadder @randahl

                    Ah - you must be American. Yeah, you're pretty much fucked, I'll give you that.

                          Powered by NodeBB Contributors
                          Graciously hosted by data.coop