FARVEL BIG TECH
Health experts: Your synthetic text "AI" overviews are misleading, for example see this about liver function tests.

Uncategorized · 53 Posts · 35 Posters · 0 Views

This thread has been deleted. Only users with topic management privileges can see it.
  • npars01@mstdn.social wrote:

    @jplebreton @emilymbender

    The question is also "LLMs are useful to whom?"

    The wealthiest seem overjoyed with it so far.

    So much so, they are funding one of the largest coercive & forced user adoption campaigns in history.

    https://www.forbes.com/sites/mattdurot/2025/07/17/bill-gates-charles-koch-and-three-other-billionaires-are-giving-1-billion-to-enhance-economic-mobility-in-the-us/

    It's the best at:
    1. Election interference
    2. Malign influence campaigns
    3. Automated cyberwarfare
    4. Manipulation of public sentiment
    5. Automated hate campaigns
    6. Plausible deniability for funding a fascist movement
    7. Frying the planet

    tomashradcky@mstdn.social wrote (#38):

    @Npars01 Boy are they not even subtly advertising their running from the coming guillotines. Just cracks me up how little they think they need to spend to avoid their own collapse.

    • jplebreton@mastodon.social wrote:

      @emilymbender we're living through a mass psychological engineering campaign and the results have been, and will continue to be, horrifying https://azhdarchid.com/are-llms-useful

      futureisfoss@fosstodon.org wrote (#39):

      @jplebreton @emilymbender

      That blog post was an interesting read, thanks for sharing.

      • emilymbender@dair-community.social wrote:

        Health experts: Your synthetic text "AI" overviews are misleading, for example see this about liver function tests.

        Google: Okay, we'll block "AI" overviews on that query.

        The product is fundamentally flawed and cannot be "fixed" by patching query by query.

        A short 🧵>>

        https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation

        michaelmcwilliams@mas.to wrote (#40):

        @emilymbender The developers do not know what answer an AI will give to a specific prompt in advance. It's a black box of answers. Therefore there is no QA of the product. It is unpredictable and therefore dangerous. Need I continue?

        • npars01@mstdn.social (post quoted above)

          coracinho@sunbeam.city wrote (#41):

          @Npars01 ah yes, bill gates, the "good billionaire"

          • emilymbender@dair-community.social (original post quoted above)

            megatronicthronbanks@mastodon.social wrote (#42):

            @emilymbender The theme to I DREAM OF JEANNIE is now playing in your head!
            It's outa that bottle!

            • emilymbender@dair-community.social wrote:

              I've been shouting about this since Google first floated the idea of LLMs as a replacement for search in 2021. Synthetic text is not a suitable tool for information access!

              https://buttondown.com/maiht3k/archive/information-literacy-and-chatbots-as-search/

              /fin (for now)

              adrianco@mastodon.social wrote (#43):

              @emilymbender Right now I’m finding that when I have a conversation with chatGPT about a topic it’s a much better and more accurate and useful experience than using Google or DuckDuckGo searches. It includes links to sources but it also has a useful wider context that informs where it looks (like whether I’m looking for UK or USA based info), and is persistent, so I can come back to the topic months later and continue.

              • coracinho@sunbeam.city (post quoted above)

                npars01@mstdn.social wrote (#44):

                @coracinho

                Lol. Are there good billionaires?

                • adrianco@mastodon.social (post quoted above)

                  bobthomson70@mastodon.social wrote (#45):

                  @adrianco @emilymbender that’s because an LLM is now more effective than the broken Google Search algorithm at surfacing the better results out of all the Gen AI slop the web is now drowning in.
                  I don’t have conversations because there isn’t anything to converse with, but I just type what I would have done into Google back when it was useful. I find it supremely ironic that this is my main use case for LLMs.

                  • emilymbender@dair-community.social (original post quoted above)

                    rich@mastodon.gamedev.place wrote (#46):

                    @emilymbender It's fine they have n-dimensional guardrails 🤡

                    • joeinwynnewood@mstdn.social wrote:

                      @iams

                      Sounds like turning off storage would be the likely culprit. If you don't store the settings, they will have to revert to the default.

                      @emilymbender

                      iams@mastodon.social wrote (#47):

                      @joeinwynnewood @emilymbender Correct, which is why I think that the default position is that DDG doesn't actually provide an opt-out, despite having a toggle switch which says otherwise. If my anonymity or preference is enforced only after presenting an ID and pulling a personal file, that ain't an opt-out. Chart the decision-tree out and the opt-out works for one specific path, coincidentally the path of idgaf and rawdog the internet. But, yeah, they have a toggle.

                      • iams@mastodon.social (post quoted above)

                        iams@mastodon.social wrote (#48):

                        @joeinwynnewood @emilymbender I'm sorry. That was uncalled for. I am tired. I am a cog in a machine that keeps people alive, and though only a cog I share the same planet and have a personal life, like you, and like the developers of DDG -- look, that sloppiness wouldn't stand in my workplace. Things have to do, or do not, and withstand and adapt. The annoyance is supreme, personally. So I get frustrated and shout at strangers. Context, I guess. Anyhow, apologies and good day/night.

                        • jplebreton@mastodon.social wrote:

                          @Npars01 @emilymbender yeah, it's clearly one of those technologies that accelerates just about everything patriarchal white supremacist capitalism does in various ways, and provides a greater means of plausible deniability to the people behind that than previous systems. it enables the monsters running the world to Capitalism Harder, at the exact moment when we need to be doing the opposite and take better care of one another and our planet. so in that sense it's definitely working as designed.

                          lritter@mastodon.gamedev.place wrote (#49):

                          @jplebreton @Npars01 @emilymbender or not designed, as this shit was ripped from labs before it was ready, because moneydudes got the fomo. the devil finds work in underengineered solutions.

                          • emilymbender@dair-community.social (original post quoted above)

                            infoseepage@mastodon.social wrote (#50):

                            @emilymbender I play a game now and then with the latest and greatest LLMs to see how easy it is to get them to make me a recipe which includes something grossly poisonous. They still fail badly.

                            • adrianco@mastodon.social (post quoted above)

                              adrianco@mastodon.social wrote (#51):

                              @emilymbender For example, here’s a conversation about what species of bat I’m looking at. It’s a much better experience than web search. Regardless of how accurate it is, the experience is going to drive usage. However it asked good clarifying questions and the answers are correct as far as I can tell. https://chatgpt.com/share/6964daa3-4a64-8009-86e9-4a1b804998a7

                              • emilymbender@dair-community.social (original post quoted above)

                                foxcj@mastodon.social wrote (#52):

                                @emilymbender That Guardian headline is itself misleading; it should say something like, "Google's false and inaccurate AI overviews continue to put users' health at risk."

                                • emilymbender@dair-community.social (post quoted above)

                                  qybat@batchats.net wrote (#53):

                                  @emilymbender The world cries out for a better search. One that can work even on an Internet full of malicious SEO engineered to generate false positives for fake reviews and other scam sites. Such a technology is desperately needed. Unfortunately we got LLMs instead, which exchange one set of problems for another: They can only return information that is true most of the time.

                                  • jwcph@helvede.net shared this topic
                                  Powered by NodeBB Contributors
                                  Graciously hosted by data.coop