I'm going to say something that's been festering in my mind for a while now. In my two decades of practice in information security, I have yet to see responsible disclosure result in measurably better security posture.
Code quality hasn't improved, patch management hasn't improved, minimum viable product hasn't improved, automated security updates, especially for IoT devices... Jesus Fucking Christ, haven't improved. The cost of failure for organizations losing your data due to gross negligence has in no way improved. Why should responsibility be the domain of the security researcher when nobody else is willing to share in that responsibility?
I'm half-tempted to say that if you have 0-days, you might as well get paid for them rather than be responsible. Because even with a tilted playing field, nothing has measurably improved since I've been here, and I would argue that with "vibe coding" and the tech industry's view of "let the AI handle it," software quality is the worst it has been since the 90s. I lived through Windows Millennium Edition. I've seen shit you wouldn't believe.
"Hardware's fucked because we can't buy any, software is fucked because the LLMs trained on Reddit and Stack Overflow are in charge now. You might as well fucking guess at this point."
@da_667 Ok, so, some thoughts. I was uncertain whether I should post this as a reply or a standalone post, as it's more "my own thoughts" than a reply, but ...
I srsly dislike the term "responsible disclosure" - coldest take ever, I know - framing all other methods as irresponsible while creating only one-sided responsibility, yadda yadda yadda. This is in addition to the discussion of financial incentives/bug bounties - morality & work often do not combine well in our current economic system.
The responsible thing to do with a detected vulnerability absolutely depends on the vendor (or, as a stand-in, the industry), on the downstream impacts of the vulnerability, on exploitation status, ... - full disclosure can absolutely be the morally right thing to do. Unfortunately, without pressure (be it economic or legal - RIP social pressure/shame as a functional tool...) for software & service providers to clean up their shit (be it with actual functional CVD programs, or by proactively not putting customers/users at risk by actually writing reliable software), there is absolutely no incentive to do so. Repeated painful full disclosures might actually be a positive, as they can contribute to such pressure. If you, however, were to drop an RCE in curl on 4chan, I would feel the need to slap you.
Things get a lot more complicated imo when it comes to using/selling/non-publicly distributing vulnerabilities with impact potential. While I said that morality & work don't combine well, this doesn't mean you get a blank check if you do it to pay your bills - amoral & immoral are very much different things. I am young, naive, and privilege-maxxing, but I believe there is a duty to, at least, not make the world worse.
None of this precludes it from being the right thing - hi, the thing that got me into political action (non-digital!) was the Phineas Fisher texts. Ignoring the particulars, I still believe the basics hold true: we need to make the world better, & hacktivism or the distribution of secrets can be a tool in this toolbox. For me, as your local non-ideologically-committed certified left-wing extremist, this precludes support for states (in most situations) & means a universal objection to private sector sales (as you srsly can't control it at that point), but my framework isn't your framework.
Anyway, too many words. tl;dr: Fuck responsible disclosure, do a case-by-case assessment & please try not to make the world worse, that's all I'm asking for.

-
@da_667 Hard agree!
-
@da_667 Welcome to the club!
Yes, "responsible" disclosure was designed to push as much responsibility as possible onto whoever finds The Bug and absolve everyone else. It is an emotionally charged term, and I think purposefully so. You are supposed to feel bad about *not* doing it, or doing it in a way The Company disagrees with. I mean, think of the children^W^W^Wusers! And then when you, in your silliness, try to do the supposedly right thing and get a legal threat back -- well, folks, that ain't the kind of responsibility I remember ever taking upon myself. If I get threats and violence for doing supposed good, I ain't doing good no more, sorry. Not interested. Maybe someone else will, I don't care. So I say we treat vulnerability disclosure as proper journalism, according to Orwell: "Journalism is printing what someone else does not want published; everything else is public relations."
Yes, the select few have made a fortune on bug bounties or whatever, but the vast majority gets breadcrumbs and the feeling of Doing The Right Thing. That feeling is where they got us. Taking responsibility for someone else's fuck-ups and feeling guilty for not being responsible enough, that's so weird, man. I didn't put the bugs in there, you did, dear company, by hiring the cheapest contractors to do the job and firing the one person who actually cared. We all know how it goes. After all, nothing a company does is in the interest of the end user or anybody else but the company itself and/or the shareholders.
So yeah, got a 0-day? Go full disclosure, or sell it off if that's your thing. At least remember you've got a choice here.
Sorry for a bunch of words, the topic hits rather close here too.
-
Nobody is held liable when breaches occur and your PII gets stolen for the fifth time in a single year.
And then we read the inevitable report that it was a third-party managed system, six months behind on patches, that got popped. Or it was a risk assessment finding they said they'd "get to eventually" and never did.
You start throwing executives in cuffs for failing to do their duty and sure as shit things would start changing.
@da_667 when I was in consulting there were a few times we uncovered huge security problems in systems. We'd get very serious, tell the client, say something doom-laden like 'this is an existential risk to your business'. No-one really believed us and, honestly, they were right not to. There's genuinely no accountability for any of this.
-
It has always been the privilege of the corporations and the rich to define what responsibility is. I'm here to tell you don't give them what they aren't willing to give us.
@da_667 As a programmer, I've seen the result of the same degradation if only from a different angle. It's super-frustrating and even before LLM code generation things weren't going well.
Nobody wants to be careful because being careful cuts into margins.
I'm glad you are putting to words something I am feeling.
-
@da_667 I started doing computer support professionally in 1985. By the end of the dot-com era in the early 2000s, I had long burned out on fighting the same battles endlessly in corporate IT. Things were never going to get better, for the reasons you cite--basically coming down to a lack of real consequences for doing a bad job.
In addition, there are now entire industries that have grown up around offering "solutions" for how broken these practices and products are. And also industries around handling the blast effects from the latest successful intrusions. You can buy "cyber insurance" to give the appearance of managing your corporate risk. InfoSec has become "too big to fail".
After thinking about this long and hard, I ended up going into the incident response business. If security breaches are inevitable, IR services will always be in demand. I get paid better and get more respect from customers than I ever did trying to do things right the first time. I don't kid myself that our remediation strategies are likely to make a long-term difference in most organizations' security postures, but sometimes there's a win.
-
@infosecdj @da_667 the government buys them so sell them for the most responsible price you can get #cut the shit in half
-
@da_667 I've been at improvements for decades inside companies. Automated scans in pipelines and IDE tools make things a lot better for those who care. I've worked with many developers who take pride in their work and just need guidance. Reported vulnerabilities motivate many people internally to improve not just the one problem, but the whole system involved.
It still takes time to change (though far less now). Sprints are measured in weeks, and work needs justification (like a report).
-
@hal_pomeranz Dealing with people who've been burned and are willing to learn from their mistakes: priceless.
-
@hal_pomeranz @da_667 some people are against AI but most of their customers use it - are you supposed to make them do it the right way first all over again? #corp culture #drawn and quartered
-
@da_667 I don't particularly want them in cuffs for failing to patch because it just strengthens the paternalistic forced patching bs.
I want them in cuffs for possession of PII we never consented for them to collect or store in the first place.
-
@da_667 isn't the whole fucking reason credit scores even exist to circumvent anti-discrimination laws and rules by using arbitrary numbers that just so happen to "correlate well" with race?
-
@da_667 Twice this year I received a stack of letters, one to every member of my family, from the medical practice my family uses. The first was for a data breach, and the second for a separate ransomware attack.
-
@drewdaniels @da_667 why do these all seem to happen after the fact? Why (especially if they are known) is it not mandatory to have them in place prior to developing/building any new products/facilities?
@Rickd6 @da_667 many of them do. Justification helps. Organizations have many people.
You’re right that it’s not perfect, and I could tell many stories.
There are many people trying to do the right things. Most individuals take pride in their work.
I hear you that things are getting worse, and it's hard to see any improvements.
-
@dalias I absolutely want executives in cuffs for failing to secure data that I have no choice but to entrust to them, data that is mostly immutable. They get paid ridiculous sums of money for the job, but there are zero consequences for that failure. And if that means an executive gets jail time for failing to patch a box, I would welcome it. At the same time, I would absolutely welcome them being imprisoned for the collection of PII, especially biometric data that they historically never needed.
When I acquired my credit card in the early 2000s, I never once needed to take a picture of my license, or take a picture of myself for some credit card company to verify my identity. They tell you that the data isn't stored, but if it isn't, then why did they need it in the first place?
-
@da_667 "..that I have no choice but to trust to them.."
This is exactly what "never consented" means.
-
@dalias really?
-
@da_667 Yes. Something handed over under duress is not consented to.
-
@da_667 I believe that if we could make the C-suite personally liable AND enforce it, many problems would sort themselves out.