[HN Gopher] Security Is a Useless Controls Problem
___________________________________________________________________
Security Is a Useless Controls Problem
Author : noleary
Score : 43 points
Date : 2024-11-11 20:20 UTC (2 hours ago)
(HTM) web link (securityis.substack.com)
(TXT) w3m dump (securityis.substack.com)
| sharkbot wrote:
| I've been thinking about this topic through the lens of moral
| philosophy lately.
|
| A lot of the "big lists of controls" security approaches
| correspond to duty ethics: following and upholding rules is the
| path to ethical behaviour. IT applies this control, manages
| exceptions, tracks compliance, and enforces adherence. Why? It's
| the rule.
|
| Contrast with consequentialism (the outcome is key) or virtue
| ethics (exercising and aligning with virtuous characteristics),
| where rule following isn't the main focus. I've been part of
| (heck, I've started) lots of debates about the value of some
| arbitrary control that seemed out of touch with reality, but I
| framed my perspective around virtues (efficiency, convenience) or
| outcomes (faster launch, lower overhead). That disconnect in
| ethical perspectives made most of those discussions a waste of
| time.
|
| A lot of security debates are specific instances of general
| ethical situations; threat models instead of trolley problems.
| xxpor wrote:
| Securities laws are written in terms of duty ethics ("fiduciary
| duty", "duty of due care", etc). That's all anyone at the top
| would care about.
| jiggawatts wrote:
| I work at medium to large government orgs as a consultant, and
| it's entertaining to watch beginners coming in from small
| private-sector companies using, as you put it, consequentialism
| and virtue ethics to fight against an enterprise that admits only
| duty ethics: checklists, approvals, and exemptions.
|
| My current favourite one is the mandatory use of Web
| Application Firewalls (WAFs). They're digital snake oil sold to
| organisations that have had "Must use WAF" on their checklists
| for two decades and will never take them off that list.
|
| Most WAFs I've seen or deployed are doing _nothing_ other than
| burning money to heat the data centre air, because they're
| generally left in "audit only mode", sending logs to a
| destination accessed by no-one. This is because if a WAF
| _enforces_ its rules it'll break most web apps outright, and
| it's an expensive exercise to tune them... and to maintain that
| tuning to avoid 403 errors after every software update or new
| feature. So no-one volunteers for this _responsibility_, which
| would be a virtuous ethical behaviour in an org where that's
| not rewarded.
|
| The upshot: I recently spun up a tiny web server that costs
| $200/mo, with a $500/mo WAF in front of it that _does nothing_,
| just so a checkbox could be ticked.
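|
| To make "audit only" vs "enforce" concrete, the whole difference
| boils down to one flag. A rough sketch in Python (a made-up WSGI
| middleware, not any real WAF product's API):
|
|     import logging
|     import re
|
|     # Toy "rule": flag query strings that look like SQL injection.
|     SQLI = re.compile(r"('|--|\bunion\s+select\b)", re.IGNORECASE)
|
|     def waf(app, enforce=False):
|         """Wrap a WSGI app: block (403) or merely log suspicious requests."""
|         def wrapped(environ, start_response):
|             query = environ.get("QUERY_STRING", "")
|             if SQLI.search(query):
|                 if enforce:
|                     start_response("403 Forbidden",
|                                    [("Content-Type", "text/plain")])
|                     return [b"Blocked by WAF rule\n"]
|                 # Audit-only: note it somewhere nobody reads, carry on.
|                 logging.warning("WAF (audit only): suspicious query: %r",
|                                 query)
|             return app(environ, start_response)
|         return wrapped
|
| Enforce mode breaks any legitimate app whose inputs happen to
| match a rule, so in practice the flag stays False.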
| tryauuum wrote:
| can it even be considered a firewall if it's running in an
| "audit only mode"?
| lll-o-lll wrote:
| So WAF. Bad? I don't know enough about it. If it's just a way
| to inject custom rules that need to be written and
| maintained, the value seems low or negative. I had hoped you
| got a bunch of packages that protected against (or at least
| detected) common classes of attacks. Or at least gave you
| tools in order to react to an attack?
| erulabs wrote:
| Security is having a bit of a heyday as everyone fights to build
| a moat against smart kids and AI. SOC2 and friends are a pain in
| the ass, but they're more of a moat than most things these days.
| Security theater? The answer is at least "mostly", but a moat
| nonetheless.
| You can _feel_ the power swinging back into the hands of the
| customer.
|
| When all software is trivial, the salesman and the customer will
| reign again. Not that I'm hoping for that day, but that day may
| be coming.
| nshkrdotcom wrote:
| DITE and CSPM are indeed important problems upon which to reflect
| security-wise.
|
| But, reflecting on XSS: What a shame that we can't evolve our
| standards, protocols, software, and hardware to fix such issues
| fundamentally.
| EE84M3i wrote:
| Other than the myriad password rules that NIST has killed off
| in competent circles, what are some other "useless controls"?
| convolvatron wrote:
| in the spirit of this article, can anyone explain why the Linux
| host-level firewall is a useful control?
| deathanatos wrote:
| I think it depends a bit on circumstance, but I think I'd
| start with "way too much software binds to 0.0.0.0 by
| default", "way too much software lacks decent authn/z out of
| the box, possibly has _no_ authn/z out of the box", and
| "developers are too lazy to change the defaults".
|
| So it ends up on the network, unprotected.
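|
| As a tiny illustration of the default-bind problem (hypothetical
| service, Python's standard socket module):
|
|     import socket
|
|     def listen(addr, port):
|         s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|         s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|         s.bind((addr, port))   # the line most defaults get wrong
|         s.listen()
|         return s
|
|     listen("0.0.0.0", 8080)    # reachable from the whole network
|     listen("127.0.0.1", 8081)  # loopback only, invisible outside
|
| The host-level firewall is the backstop for the first case: it
| catches the services that shipped with the exposed default and
| that nobody reconfigured.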
| dogman144 wrote:
| There's always an edge case, gotta know the various sec
| controls to slice the target risk outcome, vs target outcome
| == specific implementation. Security hires who are
| challenging employees are the latter types.
|
| Edge case and your answer, in spirit - public-facing server,
| can't have a HW firewall in-line, can't do ACLs for some
| reason, can't have EDR on it.... at least put on a Linux
| host-level FW and hope for the best.
| encomiast wrote:
| I bumped into controls mandating security scans, when people
| running the scans don't need to know anything about the
| results. One example prevented us from serving public data
| using Google Web Services because the front-end was still
| offering 3DES among the offered ciphers. This raised alerts
| because of the possibility of Sweet32 vulnerability, which is
| completely impractical to exploit with website scale data sizes
| and short-lived sessions (and modern browsers generally don't
| opt to use 3DES). Still, it was a hard 'no', but nobody could
| explain the risk beyond the risk of non-compliance and the red
| 'severe' on the report.
|
| We also had scans report GPL licenses in our dependencies,
| which for us was a total non-issue, but security dug in: not
| because of legal risk, but because of compliance with the scans.
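|
| For what it's worth, where you do control the TLS termination,
| the remediation is roughly a one-liner; a sketch with Python's
| ssl module (hypothetical server-side context):
|
|     import ssl
|
|     # Exclude triple-DES suites (OpenSSL cipher-string syntax);
|     # modern clients never pick them, but dropping them silences
|     # the Sweet32 finding.
|     ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
|     ctx.set_ciphers("HIGH:!aNULL:!3DES")
|
| set_ciphers takes an OpenSSL cipher string, so the same
| exclusion works in most stacks that wrap OpenSSL.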
| mmsc wrote:
| "Why do we have to do X? Because we have to do X and have
| always had to do X" is a human problem coming from a lack of
| expertise and a lack of confidence to question authority.
|
| It's a shame; your story isn't unique at all.
| pphysch wrote:
| This is quite a simplification. There are a lot of
| useless/dubious controls out there, but the problem is really
| the tension between security pragmatism and compliance regimes.
|
| ####
|
| Government: I need a service.
|
| Contractor: I can provide that.
|
| Government: Does it comply with NIST 123.456?
|
| Contractor: Well not completely, because control XYZ is
| ackshually useless and doesn't contribute--
|
| Government: _hangs up_
| deathanatos wrote:
| I think it's fine to implement a useless control to get a
| customer.
|
| Just don't pretend that you're doing it because it is a useful
| control; be clear that you're doing it because jumping through
| that hoop gets you that customer, and "we're a smaller fish
| than the government". _Especially_ with the government
| (especially if it's the USA...) there are going to be utterly
| pointless hoops. I can pragmatically smile & jump... but that
| doesn't make it useful.
| JoshTriplett wrote:
| Exactly. There is absolutely a threshold of money that will
| get me to implement FIPS. There is no threshold of money that
| will get me to say it's a good idea that has any value other
| than getting the (singular) customer that demands FIPS.
| danjl wrote:
| The vast majority of the security "industry" is about useless
| compliance, rather than actual security. The chimps have put
| their fears into large enterprise compliance documents. This
| teaches the junior security people at enterprise companies that
| these useless fears are necessary, and they pass them along to
| their friends. Why? Not just because of chimps and fear, but also
| $$. There is a ton of money to be made off of silly chimps.
___________________________________________________________________
(page generated 2024-11-11 23:00 UTC)