FARVEL BIG TECH

Today we had a fire alarm in the office.

97 posts · 71 posters

This thread has been deleted. Only users with topic-management privileges can see it.
  • tagir_valeev@mastodon.online

    Today we had a fire alarm in the office. A colleague wrote to a Slack channel 'Fire alarm in the office building' to start a thread in case somebody knew any details. We have the AI assistant Glean integrated into Slack, and it answered her privately: "today's siren is just a scheduled test and you do not need to leave your workplace". It was not a test or a drill, it was a real fire alarm. Someday, AI will kill us.

    photo55@mastodon.social · #86

    @tagir_valeev
    I think the Health and Safety Executive should take note of that.
    I also think that someone wrote and someone released that software, even if they did it in such a way that they don't know how to make it correct.

    _They_ will have killed someone.

    • davetheresurrector@mastodon.social · #87

      @tagir_valeev Repeated exposure to gaffes like this should train everyone to disbelieve everything with even a whiff of AI.

      • mason@partychickens.net

        @tagir_valeev

        From those wild-eyed radicals at Reuters:

        https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/

        titia@toot.community · #88

        @mason @tagir_valeev
        "The surgeon warned Hopkins and Acclarent “that there were issues that needed to be resolved,” the complaint adds. Despite that warning, the suit claims, Acclarent “lowered its safety standards to rush the new technology to market,” and set “as a goal only 80% accuracy for some of this new technology before integrating it into the TruDi Navigation System.”"

        Would you trust a doctor whose success rate in the operating theatre was just 80%? 🫣😱

        • terrybtwo@ohai.social

          @tagir_valeev As noted, the implication of "AI will kill us" is precisely that.

          swiftone@mastodon.online · #89

          @TerryBTwo @tagir_valeev No one would ask this question while shuffling down a crowded stairwell, and then turn back? Or be in the bathroom, hoping not to have to rush out?

          It's weird to assume that people won't be a little lazy when LLMs are so successful BECAUSE people are eager to be lazy. Also weird to focus only on a very specific event as how "AI will kill us" when we already have machines driving cars into people, deleting production databases, encouraging people to kill themselves, etc.

          • janisf@mstdn.social · #90

            @tagir_valeev
            I look at it as verification Darwinism. Whoever (in this e.g.) f*d Glean to make Slacky baby agent gave instructions to rank who's least/most ready for AI transition. The prompter/f*er clearly didn't specify list length. Most agents have efficiency built in, so the one sentence killed two Tweeters with one stone.

            It's coming for us, so maybe something like, "List reasons for assertion." Or even, "You're full of shit. Fight me." Better, "Verify."

            (Toot relevant for ten days)

            • janisf@mstdn.social · #91

              @tagir_valeev
              Whatever the tech bros want is what the AI will try to do, but the jelly is /probably/ going to be on the bottom of that sandwich. The AI/agent has no way to predict... except when we tell it. This is but one part of why Walz told everyone to get heinous ICE behavior on video. AI doesn't trust, but the same interchanges we have with people that build trust will ID us as closer to or further away from what other people verify... the stuff the mysteriously networked AIs "think" is true.

              • janisf@mstdn.social · #92

                @tagir_valeev
                In other words, we need to stop running and fight for the truth within the tools, or it will behave like fascism, because it was designed to, because of its parents, and because it never got a non-fascist non-parent adult to take it to a "questionable" play and then dinner to discuss it.

                I'm also encouraging single adults to volunteer for extracurricular activity participation/leadership. Public schools need it, and too many private schools don't want it. 😉

                • tagir_valeev@mastodon.online

                  @metacosm nobody asked the AI for input at all. It was just configured in that particular channel to answer automatically if it thinks it can help faster than fellow humans (sometimes people actually ask something that was asked before, so the AI could be helpful). The configuration will be adjusted after this incident.

                  rhelune@todon.eu · #93

                  @tagir_valeev @metacosm You need to stop anthropomorphising LLMs. LLMs do not think! They do not even hallucinate! They just spit out the most probable next tokens from their training set, and the training set is all of the human knowledge plagiarized + all of the human bullshit their crawlers could find on the web! If everyone turns them off, there will be fewer fires in the future (in both senses)!

                  • tomminieminen@mastodontti.fi

                    @tagir_valeev And no one will be held accountable for the losses of life, because AI cannot be prosecuted? 😡

                    rhelune@todon.eu · #94

                    @tomminieminen @tagir_valeev The legal person selling it can.

                    • majick@mefi.social

                      @tagir_valeev More galling still, a scheduled test of a fire alarm system typically *still includes evacuation.* Leaving the building *is* the drill. I have never worked in an office where there was any condition under which occupants are told to ignore the alarm.

                      Ignoring alarms leads to alarm fatigue which then leads to failure. Alarms either exist for a reason or they don't. A device that says otherwise is a broken device. You're right, devices like that will kill.

                      gothytim@tech.lgbt · #95

                      @majick @tagir_valeev

                      Fwiw I’ve worked in buildings with a regularly scheduled alarm test (same time and day every week) which you were expected to ignore, reporting if there was a fault. It was preceded by a recorded announcement saying it was a test, and followed by one saying the test was over and any further alarms should be responded to normally by evacuating.

                      (The drills where you do leave are more common, of course.)

                      • ponygirl@mastodon.social · #96

                        @tagir_valeev Nothing good will come from AI.

                        • aissen@social.treehouse.systems · #97

                          @tagir_valeev were the sources in "Show sources" examined? Did it find anything interesting?

                          • kramse@helvede.net shared this topic
Powered by NodeBB Contributors
Graciously hosted by data.coop