FARVEL BIG TECH
This post did not contain any content.

23 posts · 12 posters · 0 views
This thread has been deleted. Only users with topic management privileges can view it.
  • demonhouser@kind.social:

    @cR0w I actually disagree on this one, with a caveat.

    If the AI is only allowed to block, not allow, and is part of a layered system that includes traditional safeties, then there is no practical harm in adding AI to a toolset (AI is bad morally, but that's not my point here).

    Machine learning has been used to detect IoCs for a while now; I know SentinelOne was announcing that capability around 2019 (the MSP I worked for used them, so I got their newsletter).

    1/2

    demonhouser@kind.social wrote (last edited) #21

    @cR0w this also doesn't consider user feelings, because false positives are definitely more likely with an AI or machine-learning element, but I tend to err on the side of "false positives fine, false negatives bad" no matter the impact.

    Again, this is not apologetics for how garbage and damaging AI companies are, because they are very much both of those things, but from a pure performance and security standpoint, structured, layered use of AI to detect and block intrusions can work fine.
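    The "block-only, layered" arrangement described above can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: the rule engine, the score function, and the threshold are all hypothetical stand-ins. The key property is that the ML stage may only add blocks on top of the deterministic rules, never convert a rule-based block into an allow.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"

def rule_engine(event: dict) -> Verdict:
    # Traditional deterministic safety: known-bad indicators always block.
    known_bad_ips = {"203.0.113.7"}  # hypothetical IoC feed entry
    if event.get("src_ip") in known_bad_ips:
        return Verdict.BLOCK
    return Verdict.ALLOW

def ml_score(event: dict) -> float:
    # Stand-in for a trained model; returns an anomaly score in [0, 1].
    return 0.9 if event.get("bytes_out", 0) > 1_000_000 else 0.1

def layered_verdict(event: dict, threshold: float = 0.8) -> Verdict:
    verdict = rule_engine(event)
    if verdict is Verdict.BLOCK:
        return verdict        # the ML stage can never override a rule-based block
    if ml_score(event) >= threshold:
        return Verdict.BLOCK  # the ML stage may only add blocks, never allows
    return verdict
```

    Lowering the (hypothetical) threshold is the "err toward false positives" knob from the post above: it trades more spurious blocks for fewer missed intrusions.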


    • cr0w@infosec.exchange wrote (last edited) #22

      @DemonHouser If an AI system inadvertently blocks a critical system in my world, really bad things can happen. And if they do, who is accountable? A human making a human mistake is held accountable. An AI system making a "mistake" is just "lol, whoops, it's still learning" and no one is held accountable.

      Also, I dislike how, now that modern AI has been proven to be hot garbage, people are using traditional ML as a counterpoint. They are not the same, despite the overlap in their usage.

      • cr0w@infosec.exchange:

        @beyondmachines1 AI tools used by attackers have not materially impacted capabilities beyond scope and scale, but that does not change the likelihood of occurrence or the severity of impact for orgs that were already modeling their risk based on state-of-the-art threats, which should be everyone at this point. Defenders relying on nondeterministic and unaccountable systems are inevitably going to miss things because of the way existing AI tools work.

        beyondmachines1@infosec.exchange wrote (last edited) #23

        @cR0w your argument assumes full discipline and coverage of the risk assessment.

        Wildly optimistic, given that most breaches still boil down to basics like credentials, the human factor, and misconfigurations.

        No horse in the AI race. Just saying the reality is far from "should be everyone at this point".

        • jwcph@helvede.net shared this topic.
        Powered by NodeBB Contributors
        Graciously hosted by data.coop