@cR0w I actually disagree on this one, with a caveat.
If the AI is only allowed to block, not allow, and is part of a layered system that includes traditional safeties, then there is no practical harm in adding it to a toolset (AI is morally bad, but that's not my point here).
Machine learning has been used to detect IoCs for a while now; I know SentinelOne was announcing that capability around 2019 (the MSP I worked for used them, so I got their newsletter).
1/2
@cR0w This also doesn't consider user feelings, because false positives are definitely more likely with an AI or machine-learning element, but I tend to err on the side that false positives are fine and false negatives are bad, no matter the impact.
Again, this is not apologetics for how garbage and damaging AI companies are, because they are very much both of those things, but from a pure performance and security standpoint, structured, layered use of AI to detect and block intrusions can work fine.
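The "block, not allow" layering above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names, IoC list, and anomaly threshold are all made up): the ML layer can only add blocks on top of the deterministic rules, and has no code path that can override a block from the traditional layer.

```python
# Sketch of a block-only layered detection pipeline (hypothetical names).
# The ML component may veto (block) traffic but can never allow something
# the deterministic layer already blocked.

def traditional_rules(event: dict) -> bool:
    """Deterministic safety layer: block known-bad indicators (IoCs)."""
    known_bad_ips = {"203.0.113.7"}  # example IoC list
    return event.get("src_ip") in known_bad_ips

def ml_detector(event: dict) -> bool:
    """Stand-in for an ML anomaly score; blocks when the score is high."""
    return event.get("anomaly_score", 0.0) > 0.9

def should_block(event: dict) -> bool:
    # Either layer can block; combining with `or` means the ML layer
    # has no way to un-block what the traditional layer caught.
    return traditional_rules(event) or ml_detector(event)

print(should_block({"src_ip": "203.0.113.7", "anomaly_score": 0.1}))   # True
print(should_block({"src_ip": "198.51.100.2", "anomaly_score": 0.95})) # True
print(should_block({"src_ip": "198.51.100.2", "anomaly_score": 0.1}))  # False
```

The design choice is the `or`: a false positive from the ML layer adds a block (annoying but safe), while a false negative still has to get past the traditional rules.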
-
@DemonHouser If an AI system inadvertently blocks a critical system in my world, really bad things can happen. And if they do, who is accountable? A human making a human mistake is held accountable. An AI system making a "mistake" is just "lol, whoops, it's still learning" and no one is held accountable.
Also, I dislike how, now that modern AI has been proven to be hot garbage, people are using traditional ML as a counterpoint. They are not the same, despite the overlap in their usage.
-
@beyondmachines1 AI tools used by attackers have not materially changed capabilities beyond scope and scale, but that does not change the likelihood of occurrence or the severity of impact for orgs that were already modeling their risk against state-of-the-art threats, which should be everyone at this point. Defenders relying on nondeterministic and unaccountable systems are inevitably going to miss things, given how existing AI tools work.
@cR0w Your argument assumes full discipline and coverage in the risk assessment.
Wildly optimistic, given that most breaches still boil down to basics like credentials, human error, and misconfigurations.
No horse in the AI race. Just saying the reality is far from "should be everyone at this point".