FARVEL BIG TECH
Category: Uncategorized
23 posts · 12 posters · 0 views
This topic has been deleted. Only users with topic management privileges can see it.

Original post · cr0w@infosec.exchange:

This post did not contain any content.
#3 · tuban_muzuru@beige.party, in reply to cr0w@infosec.exchange:

@cR0w

[quiet laughter]

the biggest security risk, anywhere - is people.

Change your mind, right away.
#4 · cr0w@infosec.exchange, in reply to tuban_muzuru@beige.party:

@tuban_muzuru Indeed. But given the fact that I'm not familiar with your thoughts on the matter, there is a good chance that you and I disagree on which people are the greatest risk.
#5 · bakachu@infosec.exchange, in reply to cr0w@infosec.exchange:

@cR0w and third parties using AI is worse than both put together
pelle@veganism.social shared this topic.
#6 · beyondmachines1@infosec.exchange, in reply to cr0w@infosec.exchange:

@cR0w Why?
not trying to change your mind, but interested to read the thought process
#7 · kirby@freerobuxextremist.com, in reply to cr0w@infosec.exchange:

@cR0w i don't think anybody ever denied this, actually this is the common opinion
#8 · alerikaisattera@fosstodon.org, in reply to cr0w@infosec.exchange:

@cR0w more like idiots using AI is a greater risk
#9 · loke@functional.cafe, in reply to cr0w@infosec.exchange:

@cR0w attackers only need to succeed once. Defenders need to succeed every time.

When using a broad system with lots of capabilities, but also stochastic and unpredictable, guess for which side it's the most useful?
#10 · nyanbinary@infosec.exchange, in reply to cr0w@infosec.exchange:

@cR0w where does "execs using AI" rank in this?
#11 · cr0w@infosec.exchange, in reply to bakachu@infosec.exchange:

@bakachu Maybe. It depends on the org's reliance on said third parties.
#12 · bakachu@infosec.exchange, in reply to cr0w@infosec.exchange:

@cR0w true, i speak under the influence of bias and exhaustion
#13 · cr0w@infosec.exchange, in reply to beyondmachines1@infosec.exchange:

@beyondmachines1 AI tools used by attackers have not materially impacted capabilities beyond scope and scale, but that does not change the likelihood of occurrence or the severity of impact for orgs that were already modeling their risk based on state-of-the-art threats, which should be everyone at this point. Defenders relying on nondeterministic and unaccountable systems are inevitably going to miss things due to the way existing AI tools work.
#14 · cr0w@infosec.exchange, in reply to kirby@freerobuxextremist.com:

@kirby It may be common on fedi but it certainly isn't common in my experience in industry. I'm surrounded by AI-enabled attacker pearl clutchers and tech bros promising to save the world with their AI SOC magic beans.
#15 · cr0w@infosec.exchange, in reply to alerikaisattera@fosstodon.org:

@alerikaisattera Is there another term for people using AI if they aren't required to?
#16 · cr0w@infosec.exchange, in reply to loke@functional.cafe:

@loke Attackers only need to succeed once for initial access, but defenders only need to be right once to mitigate after initial access. Those cute little bugs being found by the multi-billion-dollar AI systems do not imply any legitimate offensive capabilities.
#17 · cr0w@infosec.exchange, in reply to nyanbinary@infosec.exchange:

@nyanbinary Tippy top. Highest risk to the org.
#18 · azuaron@cyberpunk.lol, in reply to cr0w@infosec.exchange:

@cR0w Blue Team practically working for Red Team.
#19 · demonhouser@kind.social, in reply to cr0w@infosec.exchange:

@cR0w I actually disagree on this one, with a caveat.

If the AI is only allowed to block, not allow, and is part of a layered system that includes traditional safeties, then there is no practical harm in adding AI to a toolset (AI is bad morally, but that's not my point here).

Machine learning has been used to detect IoCs for a while now; I know SentinelOne was announcing that capability around 2019 (the MSP I worked for used them, so I got their newsletter).

1/2
#20 · chillybot@infosec.exchange, in reply to cr0w@infosec.exchange:

@cR0w
1000%. "AI favors the defenders" my moopsy robot ass. For red teaming, just kinda working is good enough; blue team doesn't have those luxuries. And that's not even including the attack surface of AI itself.
#21 · demonhouser@kind.social, continuing from #19:

@cR0w this also doesn't consider user feelings, because false positives are definitely more likely if using an AI or machine learning element, but I tend to err on the side of "false positives fine, false negatives bad" no matter the impact.

Again, this is not apologetics for how garbage and damaging AI companies are, because they are very much both of those things, but from a pure performance and security standpoint, structured, layered use of AI to detect and block intrusions can work fine.
#22 · cr0w@infosec.exchange, in reply to demonhouser@kind.social:

@DemonHouser If an AI system inadvertently blocks a critical system in my world, really bad things can happen. And if they do, who is accountable? A human making a human mistake is held accountable. An AI system making a "mistake" is just "lol, whoops, it's still learning" and no one is held accountable.

Also, I dislike how, now that modern AI has been proven to be hot garbage, people are using traditional ML as a counterpoint. They are not the same, despite the overlap in their usage.
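[Editor's note] The "block-only, layered" design DemonHouser argues for in #19 and #21 can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: `rule_engine`, `ml_score`, and the threshold are all invented stand-ins. The key property is that the ML component may only escalate an allow to a block; it can never override a block from the traditional rule layer, so a model error fails closed (a false positive) rather than open.

```python
def rule_engine(event: dict) -> str:
    # Stand-in for traditional safeties (signatures, denylists, policy rules).
    return "BLOCK" if event.get("known_bad") else "ALLOW"

def ml_score(event: dict) -> float:
    # Stand-in for a trained anomaly model returning a risk score in [0, 1].
    return event.get("anomaly_score", 0.0)

def layered_verdict(event: dict, threshold: float = 0.9) -> str:
    # Layer 1: deterministic rules always win; the model cannot un-block.
    if rule_engine(event) == "BLOCK":
        return "BLOCK"
    # Layer 2: the model may only ADD blocks, never grant allows.
    if ml_score(event) >= threshold:
        return "BLOCK"
    return "ALLOW"
```

Under this constraint the worst a misbehaving model can do is block legitimate traffic, which is exactly the false-positive cost (and the accountability question) that #22 raises in response.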
Powered by NodeBB Contributors · Graciously hosted by data.coop