[HN Gopher] Insurers run from ransomware cover as losses mount
       ___________________________________________________________________
        
       Insurers run from ransomware cover as losses mount
        
       Author : arkadiyt
       Score  : 133 points
       Date   : 2021-11-19 16:39 UTC (1 days ago)
        
 (HTM) web link (www.reuters.com)
 (TXT) w3m dump (www.reuters.com)
        
       | aejnsn wrote:
       | The insurers should require policyholders to go through training
       | or an audit process before writing said coverage. God forbid an
       | insurance company and policyholder share the interest in safety.
        
         | notesinthefield wrote:
          | They often do! I don't know a single colleague who hasn't had
          | at least two infosec policy driven projects on their plate this
         | year.
        
       | dorianmariefr wrote:
       | Isn't it like natural selection but for tech skills?
        
         | make3 wrote:
         | victim blaming
        
         | lucb1e wrote:
         | I can see the argument, but we didn't like it for humans
         | either.
        
       | CharlieMunger wrote:
       | Buffett's Berkshire Hathaway has a large insurance conglomerate
       | (GenRE, National Indemnity, GEICO, Guard, etc.) and a policy of:
       | 
       | "No CNBC coverage."
       | 
       | CNBC stands for Chemical Nuclear Biological Cyber. This policy
       | has been in effect for twenty years, thanks to Buffett's
       | foresight.
       | 
       | If you want to learn more about Berkshire, join us on Reddit:
       | 
       | https://old.reddit.com/r/brkb/
        
         | elliekelly wrote:
         | Did you know you can comment on HN _without_ linking to that
         | subreddit?
        
           | jonwachob91 wrote:
            | An hour or so ago there was another comment on this article
           | by user 'HenryKissinger' advocating for assassinating hackers
           | and declaring war on nation states that protect hackers.
           | 
           | The comment has since been deleted, but it feels like a bunch
           | of troll accounts making statements to stir up the comments.
           | :(
        
       | mcot2 wrote:
       | I think we look at this all wrong. Cybersecurity is not only a
       | technology problem but a human problem. We need to spend a lot
       | more resources tracking down the sources of these attacks from a
       | human perspective and treating it like a military/diplomatic
       | issue if stonewalled by foreign governments.
       | 
       | Domestically we need to really increase the punishments for these
       | and other computer crimes and step up enforcement significantly.
        
         | throwaway1777 wrote:
         | We need politicians who understand computers at all first or
         | the laws are going to make zero sense like the current ones.
        
       | waihtis wrote:
        | I thought there was a booming cyber risk scoring industry
       | insurers could tap into?
       | 
       | Wait, are you saying pulling and scoring Shodan data doesn't give
       | an accurate idea of the vulnerability of your internal
       | infrastructure? ;-)
        
         | [deleted]
        
       | babyshake wrote:
       | I would imagine insurance fraud would be a big problem when it
       | comes to anonymous cryptocurrency based ransomware coverage.
        
       | VHRanger wrote:
       | Thanks, crypto!
        
         | chiph wrote:
         | TBF, ransomware existed before crypto. Crypto just made it
         | easier for the criminals to get paid, and is somewhat
         | anonymous.
        
           | vmception wrote:
            | But these ransomware operators are so comfortable in their
            | country that they don't even bother trying to leverage the
            | anonymity; they simply like the convenience of crypto.
        
             | Oddskar wrote:
             | I think you underestimate the influence corporations exert
             | on governments.
        
               | vmception wrote:
                | Oh, not at all; just reinforcing how little the
                | existence of crypto assets matters in creating this
                | reality. They would be perfectly fine using Western
                | Union, as they don't care about the investigation.
                | Crypto does reduce friction in acquiring and sending
                | arbitrary amounts, but people still just can't imagine
                | that recipients simply _want_ crypto for the sake of
                | having crypto instead of their local currency.
        
               | Oddskar wrote:
               | I don't think you can transfer millions of euros via
               | Western Union and keep it anonymous. No way.
        
               | vmception wrote:
                | You're missing that nobody cares about anonymity here;
                | that's what I've been saying for the past two posts.
                | 
                | The major ransomware operators don't care about
                | anonymity and aren't using that aspect of any payment
                | method they allow, including crypto.
               | 
               | There is simply no cooperation with local law enforcement
               | 
               | What were you referring to regarding corporations
               | exerting power on governments?
        
               | ManuelKiessling wrote:
               | Well, instead of repeating it again and again, how about
               | just stating it once, but this time with something like,
               | you know, a proof? Or at least a source for the claim?
        
           | goatsi wrote:
           | Ransomware before crypto was locking down individual desktops
           | until they went to a gas station to buy $500 in Ukash or
           | paysafecard vouchers. The ability to demand millions at once
           | (and to actually receive it) is a huge change that has
           | massively expanded the industry.
        
       | miohtama wrote:
       | > Insurers say some attackers may even check whether potential
       | victims have policies that would make them more likely to pay
       | out.
       | 
       | > Dickson said one technology client had previously bought 130
       | million pounds of professional indemnity and cyber cover for
       | 250,000 pounds. Now the client could only get 55 million pounds
       | of cover and the price was 500,000 pounds.
       | 
       | As just relying on insurance payments is not sustainable, I hope
       | we soon get to the point where companies that do not properly
       | invest in cyber security get wiped out and ones that take matters
       | seriously survive and increase their market share and profit
        | margins. The same holes will be exploited by nation-state and
        | identity-fraud hackers; in that case it is just the consumer
        | who ends up paying the bill.
        
         | dan-robertson wrote:
         | I think this does not acknowledge how incredibly asymmetric the
         | difficulty is on each side. It is expensive and difficult to
         | protect against security breaches, even with specialised
         | software and expensive dedicated cybersecurity teams.
         | Meanwhile, top-notch offensive capabilities might set you back
          | up to seven figures, which is easily affordable for unfriendly
         | regimes (eg Russia, Iran, China, North Korea) or just criminals
         | in places where they have little interference from authorities
         | (eg most ransomware).
         | 
         | One small fuckup is sufficient to have the whole thing
         | compromised (though cybersecurity people do live to talk about
         | defence in depth) and security doesn't compose well. That is,
         | security is a property of the system as a whole and security of
         | individual parts needn't imply security of the whole.
         | 
         | I think analogies to physical security are bad because when you
         | think of physical security you don't imagine stealthy strangers
         | constantly trying to pick your locks or open your windows or
         | even tailgate people into the office at all times of day or
         | night. But once one has a tool to breach a common system, it
         | can be quickly tried on many systems at little marginal cost.
         | When the marginal cost and risk are low, you may see more
         | attacks. The real difference these days is that
         | cryptocurrencies have made the potential payout high and so
          | there are a lot of incentives to carry out these attacks.
        
           | dcow wrote:
           | Engineers aren't allowed "one small fuckup" when building
           | bridges and buildings. Doctors aren't allowed "one small
           | fuckup" in the OR or when calculating dosages. These mistakes
           | are grave and should be costly. Pilots aren't allowed small
           | fuckups when flying planes.
           | 
           | The problem is that companies _don't want_ to invest the time
           | in good security because it's less profitable. Or, they are
           | old and decrepit. If the company runs critical
           | infrastructure, then peoples' safety is at risk. If they
           | don't then surely a new fad can take their place. Why should
           | we tolerate small fuckups and negligence that only serves a
            | company's shareholders' interests?
           | 
           | I do know places that don't allow random individuals to tail
           | you into a building and have cameras on entryways so you
           | can't sit there and attempt to pick the lock endlessly. And
           | they have good locks. Just because it's harder to "see"
           | digital systems doesn't make them any less relevant to
            | secure. People in software, more so on average, just don't
           | understand how the internet works so they don't know where to
           | put the cameras and how to identify the shady individual
           | trying to tail someone through an entryway that requires
           | authorization.
           | 
           | This problem is fixable but not when the prime directive is
           | "ship this experimental product as fast as you possibly can
           | or I'll hire an offshore contractor". And not when your
           | extent of experience with operating software is running a
           | cute web server on your dev laptop. People are actually doing
           | really dumb shit. Don't pretend that everyone is writing good
           | software and following best practices and they just happen to
           | get hacked by a script kiddie "oh no".
           | 
           | These are sophisticated actors targeting mature but
           | vulnerable companies that have chosen to selfishly forgo
           | healthy security practices. Again, why do we want these
           | machines participating in society? Why don't we want them
           | replaced with more secure versions?
        
             | mcot2 wrote:
              | Bad analogy, because the system is broken all the way
              | down. If what you are saying were true, a bridge engineer
              | wouldn't even be able to build a bridge, because there are
              | no materials in the world to build a safe bridge out of.
        
             | Griffinsauce wrote:
             | > Doctors aren't allowed "one small fuckup" in the OR or
             | when calculating dosages.
             | 
             | Yes they are. You should talk to an actual doctor before
             | making such claims.
        
             | [deleted]
        
             | ipaddr wrote:
             | Doctors are allowed many small fuckups and some major.
              | Doctors have insurance and medical associations with the
             | best lawyers.
        
             | dan-robertson wrote:
             | I think your first paragraph is bogus for two reasons,
             | which I'll leave at the end[1] because I'd rather respond
              | to your main point than the silly reactionary thing you
             | wrote at the top.
             | 
             | I feel like you aren't really responding to the claim that
             | having good security is extremely hard and expensive and
             | that it seems bad for every company to have to spend a lot
             | of money on a problem which is likely far from their core
             | competency.
             | 
             | The thing you're actually asking for is for a lot of
             | businesses to go bankrupt trying to be _perfect_ at
             | something that is not their traditional domain and then you
             | make some claim that they have the audacity to try to
             | create value for their shareholders instead? Isn't it
             | outrageous that companies might try to do literally the
             | thing companies are meant to do instead of being experts in
              | computer security?
             | 
             | I suppose you would counter that it is easy or cheap or
             | imperative for them to be secure but, in the world where
             | hacking is relatively easy, cheap, and profitable ($1-10mm
             | sounds like a lot to a person but not to a country or
             | reasonably sized business) and there are myriad ways to
             | make trivial-seeming configuration errors in software that
             | leave you open to attack, I think it is neither easy nor
             | cheap. And frankly I think it is not imperative for
             | businesses either because their purpose is to make money
             | and while cybersecurity is an important risk to worry
             | about, they can only invest in it so long as they are
             | making money from the thing they are actually meant to be
             | good at.
             | 
             | [1] regarding the first graf:
             | 
             | 1. Mostly small errors by engineers or doctors or pilots
             | are... small errors. Bridges won't collapse if the concrete
             | was mixed slightly too wet, and bigger cracks are usually
             | noticed and mitigated in good time. Doctors get dosages
             | wrong all the time (which is why there are attempts to
             | train nurses to question everything the doctors say) and
             | while malpractice is definitely a thing, there are lots of
             | cases where things go wrong at a small scale and are either
             | fixed or unnoticed. If you ever ask a doctor about this in
             | a social setting they will probably have plenty of
             | examples. And if you get something wrong in a plane you are
             | unlikely to immediately crash. Even landings can be aborted
             | quite late. Big planes crash very infrequently and I think
             | it is silly to therefore assume that the pilots are
             | generally infallible. So I claim that your direct statement
             | that these are not allowed is false.
             | 
             | 2. I also claim the analogy is bad because the examples you
             | give are not adversarial. The patients are generally not
             | trying to trick their doctor into making the wrong
              | prescription, and the geography isn't going to shift and
             | change to try to exploit weaknesses in the bridge built
             | across it. The pilot flies high above the ground with
             | separation from other planes and various safety systems.
             | Even in an unlikely event like an engine stall, there are
             | possibilities to recover. In computer security the threats
             | are fast and numerous so any small error can be quickly
             | magnified.
        
             | unclebucknasty wrote:
             | > _Engineers aren't allowed "one small fuckup" when
             | building bridges and buildings. Doctors aren't allowed "one
             | small fuckup" in the OR_
             | 
             | These are really poor analogies. A better analogy would be
             | engineers who then had to protect those bridges from attack
             | via a number of different modes, including drone and cruise
              | missile strikes; and seemingly legitimate travelers on the
             | bridge who are actually malicious.
             | 
             | Likewise, a better analogy would be doctors who had to
             | contend with nurses who are bad actors, substituting saline
             | for needed drugs or deliberately administering higher than
             | prescribed dosages, etc.
        
             | indymike wrote:
             | > The problem is that companies don't want to invest the
             | time in good security because it's less profitable. Or,
             | they are old and decrepit. If the company runs critical
             | infrastructure, then peoples' safety is at risk.
             | 
             | Most of the time, it's that a non-technical decision maker
              | decides to take a badly miscalculated risk. "Bob in sales
              | can't access the files he needs." "IT, give him admin
              | rights! Now! No... I don't even understand or care to
             | understand that access control mumbo-jumbo you are talking
             | about." A week later, "Hey, you know we just got
             | bitlockered. Bob's laptop got virused, and because we gave
             | him admin rights his machine encrypted all the things AND
             | the backup server too."
        
             | seized wrote:
             | As someone in the healthcare vendor side of things....
             | Doctors make a lot of mistakes. There is plenty of
             | software, processes, etc around trying to catch and
             | mitigate those mistakes.
        
             | rectang wrote:
             | Doctors don't get blamed when someone they treat
             | subsequently gets shot. Civil engineers don't get blamed
             | when a saboteur blows up the bridge they designed.
        
               | ajkjk wrote:
               | This is not a good analogy. Civil engineers would be
               | blamed if a force they were supposed to compensate for,
               | like a storm, destroys a bridge they designed.
        
               | Spivak wrote:
               | The grey area is if attackers built a weather machine to
               | create exactly the perfect storm tailored to stress a
               | small design defect until it fails. Even more so when the
               | probability of such a storm occurring naturally is 0.
        
               | Gwarzo wrote:
               | That's a great analogy.
        
               | dcow wrote:
               | Have you ever driven under the base of a large bridge
                | (you can't; it's restricted)? I'm sure you've seen the
               | road spikes and the anti car bomb barricades put around
               | high profile buildings... when something is valuable and
               | a possible target of attack you protect it and make it
               | less vulnerable.
        
             | warkdarrior wrote:
             | Engineers design for resilience to physical failures, whose
             | types are known and can be estimated, predicted, and
             | modeled. Cybersecurity has to deal with human attackers,
             | which cannot be modeled.
        
               | syshum wrote:
               | >>Cybersecurity has to deal with human attackers, which
               | cannot be modeled.
               | 
                | It absolutely can; it's called Zero Trust, and it has
                | been a cybersecurity model for many, many years now.
                | 
                | The problem is most organizations have built their
                | entire business workflow around the ring-fence model,
                | and organizations are extremely prone to "we have
                | always done it that way" when it comes to business
                | processes, so they expect IT to buy some software or
                | some widget to bolt onto the ring fence that will
                | protect them.
               | 
               | Businesses need to completely rework their internal
               | methodologies, and processes which is something most
               | companies refuse to do
        
               | staticassertion wrote:
               | We model human attackers all the time, it's probably one
               | of the core components of the role.
        
               | throw10920 wrote:
               | Human behavior is absolutely modelable - usefully, even
               | ("all models are wrong but some are useful") - and people
               | in security do it _all the time_.
               | 
               | Moreover, engineers _also_ design for resilience to human
               | "attackers" in physical systems all the time.
               | 
               | If anything, cybersecurity is easier because you can
               | leverage the work of _tens of thousands of other people_
               | in your tooling. You wanna re-write ssh or fail2ban
               | yourself?
        
             | Apes wrote:
             | "Engineers aren't allowed "one small fuckup" when building
             | bridges and buildings. Doctors aren't allowed "one small
             | fuckup" in the OR or when calculating dosages. These
             | mistakes are grave and should be costly. Pilots aren't
             | allowed small fuckups when flying planes."
             | 
             | What a beautiful dream.
        
               | dcow wrote:
               | I'm not saying it doesn't happen. I'm saying when it does
               | there are grave consequences. Malpractice insurance is a
                | thing and a similar analog to ransomware insurance.
        
             | registeredcorn wrote:
             | > Pilots aren't allowed small f*ups when flying planes.
             | 
             | I don't blame you if you don't take a strangers word on
             | this, but I would strongly encourage you to look at this
             | YouTube channel VASAviation
             | (https://www.youtube.com/user/victor981994) or watch the
             | excellent movie, "Charlie Victor Romeo", which cover in
             | extreme detail the routine, regular, and on-going mistakes
             | that are made in aviation. Mistakes happen in aviation ALL
             | THE TIME; they just don't cause death or crashes in every
             | event.
             | 
             | I can't speak for Engineers and Doctors, but for: pilots,
             | ground crews, flight mechanics, and ATC I have a tiny
             | soapbox. Each of these jobs routinely _on a daily basis_
             | have mistakes that are made. Sometimes the errors are
              | minor. Other times they are major, and aren't discovered
             | for months or years after the fact, which either led to the
             | loss of millions of dollars worth of equipment, or worse.
             | 
             | Sure, CEOs and such are usually held responsible for those
             | mistakes, but the aviation community focuses more on the
             | ideals of, "Every rule, written in blood" mentality. Things
             | are permitted, until they aren't. This is precisely because
             | it caused a disaster. Things are safe(r) now because they
             | were significantly less safe just a few years ago. The
             | aviation community is focused on preventative actions _as a
             | result of_ the ongoing looming threat of failures. There is
             | no way to avoid them entirely. There is an understanding
             | that when you are in the air, there is NEVER certainty. You
             | are NEVER safe. You should _constantly_ be concerned about
              | what can (and will) eventually go wrong. It's only a
             | matter of how bad the failure will be.
             | 
             | TLDR
             | 
             | Aviation isn't safe because they "don't allow mistakes" but
             | because they have a rigid and foundational system around
             | how to _deal with_ failure.
        
             | [deleted]
        
             | 908B64B197 wrote:
             | > Engineers aren't allowed "one small fuckup" when building
             | bridges and buildings. Doctors aren't allowed "one small
             | fuckup" in the OR or when calculating dosages. These
             | mistakes are grave and should be costly. Pilots aren't
             | allowed small fuckups when flying planes.
             | 
             | That's wrong.
             | 
             | Engineers aren't allowed to fail. They can have as many
             | "fuckups" along the way, as long as they manage to
             | compensate for it and maintain safety it's not a failure.
             | Doctors... well, this profession has a group mentality of
             | shielding one another from publicly acknowledging and
             | documenting mistakes.
             | 
              | Truth is, ransomware is utterly useless against a simple
              | back-up and recovery scenario. Any competent engineer can
              | and will deliver one if paid to. It's that companies don't
             | see the point and don't care. As long as the insurance
             | premium is less than what it will cost to implement such a
             | thing they will keep getting ransomed.
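The back-up-and-recovery point above can be made concrete. Below is a minimal sketch in Python (the function names, paths, and manifest layout are hypothetical, not any particular vendor's tool): take a timestamped snapshot, store a checksum manifest beside it, and actually test-restore by re-verifying the checksums. Snapshots kept on offline or immutable storage are what put the data out of the ransomware's reach.

```python
import hashlib
import shutil
import time
from pathlib import Path

def sha256(path: Path) -> str:
    # Hash a file so a restored copy can be verified bit-for-bit.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot(src: Path, backups: Path) -> Path:
    # Copy the tree into a new timestamped directory and write a
    # manifest of checksums next to it. If `backups` lives on
    # offline/immutable storage, later encryption of `src` cannot
    # touch the snapshot.
    dest = backups / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(src, dest)
    manifest = {str(p.relative_to(dest)): sha256(p)
                for p in dest.rglob("*") if p.is_file()}
    (backups / (dest.name + ".manifest")).write_text(
        "\n".join(f"{h}  {n}" for n, h in sorted(manifest.items())))
    return dest

def verify(dest: Path, backups: Path) -> bool:
    # A backup you have never test-restored is not a backup:
    # recompute every checksum and compare against the manifest.
    lines = (backups / (dest.name + ".manifest")).read_text().splitlines()
    return all(sha256(dest / name) == h
               for h, name in (line.split("  ", 1) for line in lines))
```

The sketch deliberately omits the hard operational parts (retention, encryption of the backups themselves, and keeping the manifest somewhere an attacker with admin rights can't rewrite it), which is where real implementations earn their keep.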
        
             | yibg wrote:
             | Yet doctors have insurance
        
               | dcow wrote:
               | Which generally covers civil lawsuits up to the point
               | where it's determined that the doctor was grossly
               | negligent. Then they aren't allowed to practice anymore.
        
             | mdaidc wrote:
             | What if I told you that an adversary can place explosives
             | at different parts of the bridge and set them off to see if
             | the bridge goes down? and they can try over and over again
             | until they succeed once, and then you are screwed.
             | 
              | This is the asymmetric nature of the threat. The
             | governments of the west need to align and go after the
             | offenders or sanction countries that give them refuge. It's
             | impossible for a single company to defend itself against
             | adversaries that can keep trying over and over again with
             | impunity.
        
             | [deleted]
        
             | charcircuit wrote:
             | >Engineers aren't allowed "one small fuckup" when building
             | bridges and buildings
             | 
             | Yes they are. When estimating the forces you need to handle
             | you can round up. This means that if you made a small
              | mistake it likely won't matter since you planned to handle
             | much more than necessary.
             | 
             | >in the OR or when calculating dosages
             | 
              | Dosages don't need to be exact. There are some tolerances
             | and it's easy to imagine a system to eliminate most of
             | human error from this process.
             | 
             | >Pilots aren't allowed small fuckups when flying planes.
             | 
             | Sure they are. A mistake doesn't mean your plane will crash
             | or fall out of the sky. Planes can already fly themselves.
             | 
             | >Why should we tolerate small fuckups and negligence that
             | only serves a company's shareholders' interests?
             | 
             | Because hindsight is 20/20.
        
               | cuu508 wrote:
               | "Any idiot can build a bridge that stands, but it takes
               | an engineer to build a bridge that barely stands." -
               | Unknown
        
             | markus_zhang wrote:
             | > Engineers aren't allowed "one small fuckup" when building
             | bridges and buildings. Doctors aren't allowed "one small
             | fuckup" in the OR or when calculating dosages.
             | 
             | I do agree with the rest of the post but this is definitely
             | NOT true. Your house probably has small issues here and
             | there if you are willing to pay for an inspector. Doctors
              | fucking things up isn't news either, plus you never know if
             | there is "one small fuckup" as long as you don't feel
             | particularly bad afterwards.
             | 
              | The thing is, I think you guys hold real-world engineering
              | in too high regard. Small fuckups happen all day.
        
               | nousermane wrote:
                | Exactly. Threat modelling is a thing outside IT security
               | too.
               | 
               | Youtube channel "MentourPilot" [0] has some excellent
               | examples of how civil aviation was built to withstand
               | "small fuckups": with "Swiss cheese" model. [1]
               | 
               | [0] https://www.youtube.com/c/MentourPilotaviation/videos
               | 
               | [1] https://en.wikipedia.org/wiki/Swiss_cheese_model
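The Swiss-cheese point lends itself to a quick back-of-the-envelope calculation. A sketch in Python, under the simplifying assumption that defensive layers fail independently (the model itself warns that holes can line up, i.e. failures can be correlated, so treat this as an upper bound on how much layering buys you):

```python
from math import prod

def breach_probability(layer_failure_probs):
    # Swiss cheese model: harm gets through only when the holes in
    # every defensive layer line up. With independent layers, the
    # chance of a full breach is the product of the per-layer
    # failure probabilities.
    return prod(layer_failure_probs)

# Three mediocre layers (each failing 10% of the time) give a
# 0.1% breach probability, an order of magnitude better than a
# single excellent layer that fails 1% of the time.
print(breach_probability([0.1, 0.1, 0.1]))   # 0.001 (up to float rounding)
print(breach_probability([0.01]))            # 0.01
```

This is why defence in depth can beat perfecting any one control, provided the layers really do fail for different reasons.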
        
             | MattGaiser wrote:
             | Engineers, doctors, and pilots are not dealing with an
             | equally clever human on the other end trying to beat them.
             | They wouldn't do well if someone were casually changing a
             | number in their calculations or swapping bottles or
             | screwing with the instruments.
        
             | ByteJockey wrote:
             | > Engineers aren't allowed "one small fuckup" when building
             | bridges and buildings. Doctors aren't allowed "one small
             | fuckup" in the OR or when calculating dosages. These
             | mistakes are grave and should be costly. Pilots aren't
             | allowed small fuckups when flying planes.
             | 
             | That's BS and you know it. Any system that relies on humans
             | being perfect fails rather quickly.
             | 
             | While major incompetence is generally punished, all of
             | those systems you describe have ways built in to deal with
             | routine human failures.
        
             | [deleted]
        
             | secondaryacct wrote:
              | You wrote a long diatribe I admit I didn't read past the
              | first paragraph, but just to say this: this is not
              | profitable because competitors don't do it. If competitors
              | have to do it because there's no insurance, suddenly the
              | cycle starts and everyone starts doing it.
             | 
              | It's a risk assessment: as we pile crap on top of crap in
              | deeper and deeper systems, security holes grow more
              | numerous to the point that it will be profitable to cover
              | them better than the competition.
        
             | dboreham wrote:
              | A bridge is O(10^6) times less complex than an F500 IT setup.
        
             | sokoloff wrote:
             | > Pilots aren't allowed small fuckups when flying planes.
             | 
             | Pilots are allowed small fuckups _all the time_ when flying
              | planes. I've got around 1500 hours and haven't had a
             | perfect flight yet. When I stop flying, I expect to have
              | 3000 hours and still not to have had a perfect flight.
             | 
             | Aviation is designed around having redundancies,
             | tolerances, and contingencies for these small fuckups.
        
               | rollcat wrote:
               | > Aviation is designed around having redundancies,
               | tolerances, and contingencies for these small fuckups.
               | 
               | Are there lessons for software engineering to be learned
               | from aviation?
        
               | sokoloff wrote:
               | I think so. I've enshrined an adaptation of FAR 91.3 into
               | our technical operations handbook (which is incorporated
               | into our SOX compliance policies). The intention was to
               | make sure that a specific group is clearly,
               | unambiguously, and completely in charge. They are the
               | final authority as to the production operation. (We call
               | this group "Problem Management", though in the ITIL
               | framework this is more akin to critical incident
               | management)
               | 
               | "Problem Management is exercising emergency authority
               | here" clears away any ambiguity and the confidence this
               | gives them means they are free to act to resolve an
               | incident rather than wonder who they need to call to give
               | the blessing.
               | 
               | https://www.law.cornell.edu/cfr/text/14/91.3
               | 
               | When onboarding new members to that group, we go over
               | this, including the origin story as part of helping them
               | understand how (and that) we expect them to use this.
               | 
               | In other areas? Probably. Aviation tries not to bet lives
               | on a single piece of equipment working perfectly. Where
               | that's unavoidable (like the Jesus nut and pin on a
               | helicopter), inspection and maintenance procedures are
               | more stringent.
        
               | marcosdumay wrote:
               | For software reliability, many of them. A large number
               | are mainstream. If you take a distributed systems book,
               | you will see many, and test suites are an evolution of
               | the aviation checklists.
               | 
               | For security, I don't think there are any. Aviation isn't
               | great on security anyway.
        
               | klodolph wrote:
               | SREs I know talk about aviation incidents all the time,
               | spinning it into a "what lessons do we learn?"
               | 
               | - Aviation incidents are investigated and the
               | investigation is typically bent on discovering root
               | causes, not assigning blame. Do the same thing with bugs
               | & security breaches, you'll learn more.
               | 
               | - Pilots spend time in simulators practicing unusual
               | situations like water landings or engine failure--do the
               | same thing in software, simulate failure scenarios and
               | intrusion events and see how your engineers and
               | technicians handle those events. You develop new training
               | materials and redesign systems based on what you learned
               | in simulations.
               | 
               | - Pilots rely heavily on checklists rather than memory to
               | ensure that all steps have been completed. This
                | translates well to QA, deployment, and various tasks
                | you need to do in production, like migrations or whatnot.
               | 
               | - Pilots require good UX to do their job and cannot be
                | overloaded with too many tasks. Sometimes an incident will
               | be traced to dangerous things that don't seem dangerous
               | when you look at the UI (like thrust reversers), or
               | traced to systems that assigned too many duties to pilots
               | (who get overloaded). For example, if you have a button
               | that deletes a database, it should require a serious
               | confirmation step, like a molly guard on a plane.
               | 
               | - Tasks must be explicitly delegated to individual people
               | (which is what happens on an airplane). For example, in
               | the Gimli Glider incident, the flight engineer role was
               | eliminated and the engineer's responsibilities were not
               | explicitly delegated to the pilot or copilot, but
               | assigned to "both". Same should be true for tasks in
               | software engineering and incident response.
               | 
               | We also talk about how medicine seems uninterested in
               | learning these lessons, for various reasons.
        
           | indymike wrote:
           | > I think this does not acknowledge how incredibly asymmetric
           | the difficulty is on each side. It is expensive and difficult
           | to protect against security breaches,
           | 
            | I'm not a security guy, but occasionally I get asked by a
            | friend with a big IT problem for a second opinion, and
            | that has happened with ransomware six times in the last
            | three years. From what I've seen, ransomware attacks are the
           | result of a conspiracy of neglect and a couple global IT
           | mistakes:
           | 
           | * Backup/restore doesn't work at all, which is really the
           | story on every ransomware attack I've seen. Testing and
           | verifying operation was neglected.
           | 
           | * Services are not partitioned in a way that isolates them
           | from attack. For example, backups are stored on a writable
           | network share leading to the backup being encrypted by the
           | ransomware. This is often because getting access control on a
           | service right takes time, and does make it harder for users
           | when they are doing something out of the ordinary.
           | 
           | * User access is read write on everything and uses shared
           | credentials (for example, using the same database user
           | account, with admin rights on all services that touch a
           | database server). This makes it easy for the IT guy, and
           | makes it very easy for the ransomware to spread through the
           | network and take down stuff it shouldn't be able to touch:
           | like the tables for your billing system.
           | 
            | * There's a false belief that you can isolate an insecure
            | network from the secure one. This may work (in theory) in a
           | data center where there is ironclad control of what is
           | connected to the network, but in an office environment, this
           | is impossible. Companies who run their internal network like
           | an ISP where nothing is trusted do a lot better.
           | 
           | * Out of date server software has massive vulnerabilities and
           | hasn't been updated in years and years. This is often because
           | someone customized something and did not maintain it, and
           | eventually, it became impossible to upgrade. So, the admin
           | leaned in on the belief that they could secure the system at
           | the network level.
           | 
           | Insurance companies will have to insist on a few practices if
           | they want to actually be able to sustainably insure against
           | ransomware:
           | 
            | * Working backup / restore
            | 
            | * No shared admin credentials
            | 
            | * Zero trust on the LAN
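The first of those practices, actually testing restores, is easy to automate. A minimal sketch, assuming tar-style archives and throwaway scratch directories (all names here are illustrative, not a real product's API):

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

def checksum_tree(root: Path) -> dict:
    """Map each file's relative path to its SHA-256, so a restored
    tree can be compared against the original byte-for-byte."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def backup_and_verify(src: Path) -> bool:
    """Archive src, restore it to a scratch directory, and verify
    that every file survives the round trip."""
    with tempfile.TemporaryDirectory() as scratch:
        archive = Path(scratch) / "backup.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(src, arcname="data")
        restore_dir = Path(scratch) / "restore"
        with tarfile.open(archive) as tar:
            tar.extractall(restore_dir)
        return checksum_tree(src) == checksum_tree(restore_dir / "data")
```

A scheduled job running something like this against last night's backup turns "we have backups" into "we have restores", which is the distinction the comment is drawing.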
        
           | throwawayay02 wrote:
            | Then build your company in a way that a security vulnerability
            | won't end it. Don't save unnecessary data, don't store
            | everything in the same place, make backups. It being harder
            | is no excuse for doing shoddy work. "Move fast and break
            | things" is a terrible motto for both the company and its
            | clients.
        
           | staticassertion wrote:
           | > It is expensive and difficult to protect against security
           | breaches,
           | 
           | No it isn't, it's super cheap and dead simple. What's
           | expensive and difficult is fixing security problems super
           | late in the game. Undoing bad security is hard. Doing it
           | right is really simple.
           | 
           | For example, we have 2FA everywhere at my company. It's 10
           | people, so that's easy, and it always will be. If we were
           | 2,000 people I'd have to go through hoops and it'd be a whole
           | mess to roll out 2FA everywhere. By using U2F 2FA from day
           | one we've more or less eliminated credential theft as a
           | threat. By disabling app execution we've eliminated malware
           | as a threat.
           | 
           | Right off the bat the vast majority of attacks just don't
           | work, and those were trivial to implement. We do _way_ more
           | than that, and it was all dead simple.
           | 
           | > One small fuckup is sufficient to have the whole thing
           | compromised
           | 
           | It definitely shouldn't be.
           | 
           | Companies flat out do not care, in part because there aren't
           | consequences for not caring. Otherwise they'd do something
           | about it. Even if you're on an older network where you've got
           | AD and garbage like that you can do a lot to improve things.
        
             | Closi wrote:
             | > No it isn't, it's super cheap and dead simple. What's
             | expensive and difficult is fixing security problems super
             | late in the game. For example, we have 2FA everywhere at my
             | company. It's 10 people, so that's easy, and it always will
             | be. If we were 2,000 people I'd have to go through hoops
             | and it'd be a whole mess to roll out 2FA everywhere.
             | 
             | Lots of companies were founded prior to the popularisation
             | of 2FA, and these security standards change over time.
             | 
             | Making new applications secure by modern-day standards
             | might be relatively simple - although you are still exposed
             | to security risks if one of your vendors has a
             | vulnerability (you don't often get to see the codebase of
             | your vendors, so a zero day can hit hard).
             | 
             | Then keeping a legacy infrastructure, older codebase or
             | historical on-premise applications (where the vendor may
             | not even exist anymore) secure is more difficult. And all
             | those solutions were 'secure' by the standards of when they
             | were implemented, just times have changed.
             | 
             | And then on top of that, we are talking about Ransomware
             | hackers which are buying zero-days for host operating
             | systems - and at that point all bets are off. Let's not
             | forget, you just needed 1 machine of the 2,000 to be 2
             | months out of date on patches and you were susceptible to
             | WannaCry.
        
               | staticassertion wrote:
               | > Lots of companies were founded prior to the
               | popularisation of 2FA.
               | 
               | Like I said, what's hard is undoing bad security. Still,
                | I know companies that are nearly a century old that have
                | rolled out strong 2FA policies across tens of thousands
                | of workers across the globe. They do it because they
               | value security.
               | 
                | It's much harder to fix a bad network, but it's still
                | just a matter of effort. It's not like we don't know
                | _how_ to do it; it's a matter of effort.
        
           | [deleted]
        
           | everdrive wrote:
           | People always talk about how asymmetrical the attack surface
           | is, and I don't disagree with this in principle. However,
            | most companies don't do a lot of the basics: segment
            | networks, keep applications and software up to date, and
            | keep admin or management interfaces inaccessible.
           | 
           | The biggest step that I think most companies could benefit
           | from without drastically increasing their cybersecurity
           | spending would simply be to attempt to deploy and use less
           | software and fewer services. Each new piece of software is
           | just waiting to be exploited, waiting to be out of date and
           | forgotten. It'll become part of someone's "necessary"
           | workflow, but it will never be important enough that it's
           | simply kept up to date and managed well. If that software
           | were never there in the first place, the business would
           | simply use some other workflow process, and there would be
           | nothing to be forgotten and exploited.
        
           | pid-1 wrote:
           | I've seen a few successful attacks in wealthy corps and none
           | of them were part of a North Korean plot. Rather they were
           | caused by business still struggling with basic stuff like
           | MFA, password complexity, device management, user
           | permissions, software patching, etc...
        
             | dan-robertson wrote:
             | Right, I don't really believe the narrative about foreign
             | countries carrying out these attacks. I think they are
              | sufficiently cheap that they can be done by criminals in
             | countries where it is relatively safe for them to act.
        
           | RHSeeger wrote:
           | > I think this does not acknowledge how incredibly asymmetric
           | the difficulty is on each side. It is expensive and difficult
           | to protect against security breaches, even with specialised
           | software and expensive dedicated cybersecurity teams.
           | 
           | I'm curious how much the "locked door" idea comes into play
           | here. Locking your doors doesn't keep out thieves. Rather, it
           | makes you less of a target because you're more difficult than
           | others that didn't lock their doors. Is it reasonable to
           | assume that, the more you protect your system, the less
           | likely you are to be attacked... even if you don't do a
            | perfect job?
           | 
           | I would guess the answer is a definitive yes.
        
           | Rd6n6 wrote:
            | What backup strategy do you need for ransomware? I assume
           | you need incremental backups so your newer corrupted files
           | don't overwrite your older working files?
        
             | Xylakant wrote:
             | Backup software that allows for write-once-never-modify is
             | great in that regard. One of my favorite features of
              | tarsnap: you can separate privileges for writing backups
              | and for removing them. Introduces a bit of overhead, though.
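Even without tarsnap's key separation, part of the write-once idea can be approximated with a naming scheme under which writers only ever create fresh, timestamped archives and never overwrite, while the delete privilege lives elsewhere (an offline key, a different account). A sketch with hypothetical names:

```python
from datetime import datetime, timezone
from pathlib import Path

def next_snapshot_path(backup_dir: Path, prefix: str = "snap") -> Path:
    """Return a fresh, timestamped archive path. The writing side
    only ever creates new files under this scheme; it never
    overwrites, and deletion is a separate privilege held elsewhere."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S.%f")
    path = backup_dir / f"{prefix}-{stamp}.tar.gz"
    if path.exists():  # refuse to clobber an existing snapshot
        raise FileExistsError(path)
    return path
```

With this arrangement, ransomware that steals the write credential can add garbage snapshots but cannot destroy history, which is exactly the property the incremental-backup question above is after.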
        
           | TheCoelacanth wrote:
           | You can't completely eliminate the risk, but simply having
           | regular backups and actually testing the process for
           | restoring from them to make sure that they work can greatly
           | mitigate the damage from ransomware attacks.
        
           | qaq wrote:
           | If you think the state of physical security is much better I
           | have some disturbing news for you
        
           | mannykannot wrote:
           | It is not clear to me that OP's position fails to acknowledge
           | the difficulties. It is, rather, the tacit assumption that
           | this problem could be mitigated through insurance, in the way
           | that comparatively low-level scams and thefts are, that fails
           | to recognize the magnitude of the threat.
        
             | pfortuny wrote:
             | I am amazed that an insurance company is keen to make such
             | a lose-lose deal, honestly, without proper verification of
             | the capabilities of the insured person.
             | 
             | Well, they seem to be learning now.
        
         | rectang wrote:
          | State level actors are already involved, through
         | sheltering criminals seen to be damaging the economies of other
         | states.
         | 
         | https://en.wikipedia.org/wiki/Maksim_Yakubets
         | 
         | The burdens of heated economic warfare are falling on private
         | companies, because laws against ransomware can't practically be
         | enforced. If these crimes were taking place entirely within the
         | sphere of a single state, capturing the perpetrators wouldn't
          | be completely impossible, as it is now.
         | 
         | Is your company prepared to be raided by a highly trained
         | state-sponsored militia? Would we expect every company to
         | ensure that its facilities can withstand artillery bombardment?
        
           | dan-robertson wrote:
           | You write nation state but link to an article about a
           | Russian. Outside of the United States[1], 'nation state' is
           | not synonymous with 'country' (for example Russia and Iran,
           | two commonly mentioned adversaries, are multiethnic states
           | rather than nation states).
           | 
           | If you don't want to write 'country' because it isn't fancy
           | enough, you can say 'state level actor' or 'government
           | actor'.
           | 
           | I realise this is a stupid nitpick but this is a hill I am
           | willing to die on.
           | 
            | [1] lots of people in the national security sphere in the US
            | like to use the term 'nation state', but they also use lots of
            | stupid terms that don't seep so much into everyday language.
           | I do notice some media outlets or government sources outside
           | of the US using alternative terms like the ones suggested
           | above.
        
             | Apes wrote:
             | I'm not sure why you went on this massively pedantic rant,
             | or why you chose _Russia_ as the hill to die on.
             | 
             | Russia is actually a fairly good example of a Nation State.
             | It has a largely homogeneous population of Russian speaking
             | Rus people, and the minorities mostly live in autonomous
             | regions that were created to be run by the various ethnic
             | minorities in Russia. There's even a Jewish autonomous
             | region in Russia. Russia is pretty solidly in the Nation
             | State category.
        
             | mistrial9 wrote:
              | I agree with the post here -- it is a terrible error to
              | brand an entire nation as "criminal" when it is antagonists
              | within it doing antagonistic things. This is fundamental to
              | the concept of justice. On the other side, it is
              | fomenting prejudice directly to tar an entire set of the
              | world, wherever. Thirdly, it ignores a lot of context.
        
             | xrisk wrote:
             | I don't see how being multiethnic makes Russia or Iran not
             | have national identities (the requirement for being a
             | nation state).
        
               | rectang wrote:
               | It's fine, I'll go change it. Let's not get distracted by
               | a pedantic side issue.
        
             | galangalalgol wrote:
              | Why isn't Russia a nation? It's a large group of people
              | ruled by a single government. Isn't that a nation?
        
               | WastingMyTime89 wrote:
                | A nation is a loosely defined concept popularised at the
                | end of the nineteenth century by the French and the
                | Germans, each with a different definition relating to a
                | mostly fantasized shared history and a vague appeal to a
                | common identity, and whose usefulness diminishes quickly
                | when it's not about justifying war and persecution.
               | 
               | Meanwhile what you are talking about is a country: a
               | sovereign legal entity which actually exists.
        
               | lottin wrote:
               | A nation is about a shared identity that binds people
               | together, not necessarily ruled by a single government.
               | For example, the Kurdish nation is spread across several
               | states, so not ruled by a single government but a nation
               | nonetheless. Whereas states can encompass various
               | nations. In short, the Russian nation is different from
               | the Russian state, and calling the Russian state a
               | nation-state can be misleading.
        
               | galangalalgol wrote:
               | American Heritage says a nation is a large group of
               | people with one government. Merriam Webster says a nation
               | is an area of land with one government. Leaving people
               | out entirely. Cambridge dictionary has the combination of
               | those as the first definition, but adds the cultural
               | definition second. It seems like this word is flexible
               | enough in common usage to cover both uses with context
                | necessary to distinguish usage. Dying on that hill seems
               | excessive.
        
               | lottin wrote:
               | It depends, I think the distinction is more important if
               | you're a member of an endangered, stateless nation,
               | because in that case the term nation-state implies that
               | your people does not exist as a nation, whereas if you
               | belong to the dominant nation within a state, then it's
               | the other way round, it's in your interest to reinforce
               | the idea that there's one nation (as opposed to multiple
               | ones), and therefore prefer the term nation-state.
        
               | Mountain_Skies wrote:
               | Sounds more like a country than a nation. Nations are
               | more cultural which is why there can be a Navajo nation
               | or a Kurd nation without there being a Kurdistan and the
               | Navajos being part of the United States with limited
               | autonomy. As is the case with anything in this sphere,
               | the edges can be a bit fuzzy and not everyone agrees on
               | the definition of nation.
        
         | unclebucknasty wrote:
         | This blaming of companies and effective absolving of criminals
          | _every single time_ is ridiculous, and I don't believe the
         | etiology of this "sentiment" is organic. It has the hallmark
         | signs of Russian propaganda. Deny and redirect blame.
         | 
         | At the end of the day, this is a national security and
         | organized crime problem. Russia and other adversarial
         | governments are either directly behind the attacks or openly
         | sanction them. It makes no sense to blame the companies, any
         | more than it would to blame them for having a bomb dropped on
         | them.
         | 
         | We need to be rallying behind our companies and our governments
         | to defend the interests we should all share as people of good
         | faith, whose prosperity and well-being are under attack. These
          | attacks represent asymmetrical battlefield actions and we
         | should be rallying around the idea of defense plus effective
         | deterrent and offensive capabilities.
         | 
         | Stop cheering on criminals and giving a pass to openly hostile
          | regimes. It's past time to give Russia and other belligerents
          | the smack in the mouth they've earned.
        
           | goldenkey wrote:
            | A teenager can do these attacks. Quit using boogeymen as
            | excuses.
        
             | unclebucknasty wrote:
              | > _A teenager can do these attacks. Quit using boogeymen as
              | excuses._
             | 
             | This is exactly the kind of propaganda I'm referring to.
             | 
             | We all know that sophisticated attacks exist, the attack
             | surface is vast, and it is extraordinarily difficult to
             | indefinitely secure that attack surface against determined
             | adversaries.
             | 
             | We also know that foreign adversaries, and not "teenagers",
             | are responsible for the overwhelming number of these
             | attacks.
             | 
             | But, here we have an apologist for those actors who blames
             | the victims and calls the simple identification of the
             | criminals "using boogeymen as excuses".
             | 
             | It's gaslighting propaganda that has a consistent quality,
             | and its origin is not organic.
        
               | staticassertion wrote:
               | It might be underselling their capabilities by referring
               | to them as teenagers, but you're overselling the
               | difficulty - it is not extraordinarily difficult to deal
               | with these attackers.
        
               | unclebucknasty wrote:
               | > _you 're overselling the difficulty - it is not
               | extraordinarily difficult to deal with these attackers._
               | 
               | Of course you'd say that, because you're vastly
               | underselling the difficulty.
               | 
               | You're not accounting for the number of attack surfaces,
               | the number of attack vectors, or variables that are
               | outside of a firm's control, such as zero days in
               | software from third party vendors.
               | 
               | Many firms also deploy tons of legacy code that they
               | depend on to operate their businesses, some of which may
               | have been developed by vendors long gone and cannot be
               | easily replaced.
               | 
               | Social engineering attacks are also becoming vastly more
               | sophisticated.
               | 
               | In general, you're not accounting for the relentlessness
               | of these actors. Sure, it's easy to mock a company when
               | they are sniped over some low hanging fruit, but I've
               | been on the front lines of having to deal with these
               | types and it's nonstop cat and mouse. They only have to
               | be right once and most small companies don't stand a
               | chance.
               | 
               | But, none of that is really the point. The real point is
               | that the blame is not with the victims, but with the
                | criminals and the nations who sponsor them. So, we should
               | respond accordingly, instead of accepting or
               | regurgitating their victim-blaming propaganda.
               | 
               | I know at least four people who work at companies that
               | were attacked over the last few years. Two of them are
               | small businesses that sustained devastating losses and
               | lost time.
               | 
               | The economics hurt all of us, raise prices and can cost
               | lives. These are attacks on society's collective
                | security. What's so hard about holding these criminals
               | and their sponsors accountable?
        
               | staticassertion wrote:
               | > Of course you'd say that, because you're vastly
               | underselling the difficulty.
               | 
               | Well, I don't really think so, obviously. And where you
                | made an affirmative assertion, i.e. "it is extraordinarily
                | hard", I just made a negative assertion, "it isn't". I
               | didn't qualify how hard it is.
               | 
                | But regardless:
                | 
                | > You're not accounting for
               | 
               | I am. Been in infosec my whole career and well before it.
               | Been a CEO of an infosec company for two years now.
               | 
               | > number of attack surfaces, the number of attack vectors
               | 
               | The major threats are the same for virtually every
               | organization. Phishing and malware. Both have extremely
               | effective measures that any organization can roll out:
               | 
               | 1. U2F (unphishable credential)
               | 
               | 2. Default-deny execution (99% of malware is dead, and
               | you now control more of your attack surface)
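At its core, default-deny execution is an allowlist check: anything whose hash isn't pre-approved doesn't run. A toy sketch of the policy logic (real enforcement lives in the OS or an endpoint agent, not application code, and the "binaries" here are stand-ins):

```python
import hashlib

def may_execute(binary: bytes, allowlist: set) -> bool:
    """Default-deny: permit execution only if the binary's SHA-256
    is on the allowlist; everything unknown is blocked."""
    return hashlib.sha256(binary).hexdigest() in allowlist

trusted = b"#!/bin/sh\necho payroll"  # stand-in for an approved binary
allow = {hashlib.sha256(trusted).hexdigest()}
print(may_execute(trusted, allow))         # True
print(may_execute(b"dropper.exe", allow))  # False
```

The asymmetry is why this measure is so effective against commodity ransomware: the defender only has to enumerate what is allowed, and every payload the attacker mutates still hashes to something off the list.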
               | 
               | > such as zero days in software from third party vendors.
               | 
               | You can defend against this in a number of ways as well.
               | I do feel that vendors are often the weak link.
               | 
               | > Many firms also deploy tons of legacy code that they
               | depend on to operate their businesses, some of which may
               | have been developed by vendors long gone and cannot be
               | easily replaced.
               | 
               | That was a mistake on their part. Even still, you don't
               | have to replace it to make it safer. You can isolate it,
               | build around it, etc.
               | 
               | Not to mention, very little software is truly
               | irreplaceable.
               | 
                | > The real point is that the blame is not with the
               | victims, but with the criminals and the nations who
               | sponsor them
               | 
               | OK but that's a different point than what you originally
               | made. You stated that it is extremely difficult to defend
               | against these attacks, I'm saying it isn't. Whether one
               | should have to defend against them or not really wasn't
               | your point, even if now you say it is.
               | 
                | > Two of them are small businesses that sustained
               | devastating losses and lost time.
               | 
               | It's an awful thing. They have my sympathy.
               | 
               | > What's so hard about holding these criminials and their
               | sponsors accountable?
               | 
               | Right, so, here's the deal.
               | 
               | 1. Many of the breached companies are not really
               | 'victims'. Instead it is their users who are victims. So
               | we hold them accountable _because it is their
               | responsibility_ to not let their users' data get owned.
               | That's on them.
               | 
               | 2. We can't hold attackers accountable for a number of
               | reasons. Maybe in a moral sense we can, but in a
               | practical sense we have to take precautions.
               | 
               | I would never blame an end user, a singular person, for
               | getting owned. It's not their job to protect themselves
               | from the world.
               | 
               | I'll absolutely blame companies (ones with user data)
               | who get owned, because when you start a company you're
               | taking on a number of additional obligations and
               | responsibilities.
        
         | fisherjeff wrote:
         | I don't think the businesses that are buying insurance (my own
         | included) are "just relying on insurance payments." It's just
         | an additional layer of risk mitigation - not sure why there's
         | so much derision in many of the comments here.
        
         | tfehring wrote:
         | There's nothing inherently unsustainable about relying on
         | insurance for this type of thing. Arguably, properly priced
         | cyber coverage would _increase_ companies' incentive to invest
         | in good security, since it translates a possible cost in the
         | future to a certain cost today in the form of higher premium.
         | 
         | The problem is that cyber risk is (1) effectively impossible to
         | model due to a lack of representative data and (2) probably
         | highly correlated between companies, meaning that a
         | vulnerability in a widely-used library or platform could mean
         | massive systemic risk for insurers. As a result, premiums
         | probably don't align well with the underlying risk, even after
         | the corrections described in the article. Profits in cyber
         | insurance were very high (think combined ratios in the 50s-60s)
         | and stable for a long time, but that ship seems to have sailed.
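         | 
         | The combined ratio mentioned above is (incurred losses +
         | expenses) / earned premium, quoted as a percentage; below 100
         | means an underwriting profit. A minimal sketch (the dollar
         | figures are made up for illustration):

```python
# Combined ratio: (incurred losses + expenses) / earned premium,
# quoted as a percentage. Below 100 means an underwriting profit.
# The dollar amounts below are illustrative, not real book figures.
def combined_ratio(losses, expenses, earned_premium):
    return 100.0 * (losses + expenses) / earned_premium

# Early cyber insurance: $100M earned premium, $35M losses,
# $25M expenses -> combined ratio of 60 (very profitable).
print(combined_ratio(35e6, 25e6, 100e6))  # 60.0

# After ransomware losses mount: same premium, much higher losses
# -> combined ratio above 100 (unprofitable).
print(combined_ratio(80e6, 30e6, 100e6))  # 110.0
```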
        
           | toss1 wrote:
           | >>There's nothing inherently unsustainable about relying on
           | insurance for this type of thing.
           | 
           | The counter to this is the behavior of the ransomware gangs -
           | they scale their demands to the available insurance. So,
           | when everyone has insurance, everyone is a target, and
           | promptly pays out at the maximum insurable level (higher
           | payouts become too much trouble for the ransomers). As this
           | scales up, either the insurance company goes bankrupt, or
           | the premium rises to match the payout.
           | 
           | Insurance is not the fix and arguably makes things a lot
           | worse, as it provides a stream of funds that helps the
           | ransomware gangs win the arms race. I'm not usually
           | enthusiastic about law-enforcement-first policies, but it
           | does seem from a game theory perspective that it'd be more
           | effective to outlaw ransomware payments to choke off funds
           | from the criminals, and aggressively hunt them. Possibly
           | fines doubling the ransom paid would help fund the fight on
           | the enforcement side?
        
             | TheCoelacanth wrote:
             | Insurance to pay off the gangs makes things worse, but
             | insurance to pay for the damage caused by refusing to pay
             | wouldn't.
        
               | toss1 wrote:
               | Yes, good point.
               | 
               | Although paying only for recovery requires a lot of
               | access to internal accounting information to ensure that
               | the victim is actually paying for recovery and not paying
               | off the gangs in secret. And there's a pretty strong
               | motive for the victim to do this.
               | 
               | While I find insurers are far too often in the category
               | of scammers (e.g., see rescission of fully paid health
               | insurance policies on bogus criteria after an insured
               | person gets an expensive disease), I don't envy them
               | here. I don't see how an insurer is to make a product
               | profitable (i.e., worth selling) when they are
               | specifically & systematically targeted by these gangs.
        
           | elliekelly wrote:
           | This is a really interesting point. Arguably the entities in
           | the best position to "purchase" the insurance (and indemnify
           | users) are the software companies rather than the licensees
           | of the software.
        
             | rectang wrote:
             | At which point, we'll discover that selling such indemnity
             | guarantees as part of software is ludicrously unprofitable
             | and unsustainable, just like the insurance industry has.
             | 
             | There is not a software tool which on its own can defend
             | against ransomware. The most robust defense requires a
             | total system design of "Continuous Restoration", with
             | everything backed up, with backups stored in such a way
             | that they cannot be destroyed, with backups constantly
             | being restored to production, and with periodic human
             | monitoring and manual confirmation that the system is
             | working.
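              | 
              | A toy sketch of that "Continuous Restoration" loop (the
              | names here - WriteOnceStore, backup, restore_and_verify -
              | are hypothetical; a real system would use object-lock
              | storage and restore into an isolated environment):

```python
# Hypothetical sketch of "Continuous Restoration": back up, store
# immutably (write-once), then continuously restore and verify that
# the restored data matches what was backed up.
import hashlib

class WriteOnceStore:
    """Backups can be added but never overwritten or deleted."""
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        if key in self._blobs:
            raise PermissionError("backups are immutable")
        self._blobs[key] = bytes(data)

    def get(self, key):
        return self._blobs[key]

def backup(store, key, data):
    # Record a content digest at backup time for later verification.
    store.put(key, data)
    return hashlib.sha256(data).hexdigest()

def restore_and_verify(store, key, expected_digest):
    # The continuous-restoration step: actually restore the backup
    # and confirm it matches what was originally backed up.
    data = store.get(key)
    return hashlib.sha256(data).hexdigest() == expected_digest

store = WriteOnceStore()
digest = backup(store, "db-2021-11-19", b"customer records")
print(restore_and_verify(store, "db-2021-11-19", digest))  # True
```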
        
           | rkagerer wrote:
           | _The problem is that cyber risk is effectively impossible to
           | model_
           | 
           | As an embedded consultant I've worked very closely with
           | several companies in at-risk spaces (eg. banking), and from a
           | technical / corporate-culture perspective I have a good idea
           | of which ones are less likely to be breached.
           | 
           | I think part of the challenge is there's a huge disconnect
           | between how the insurance companies work (fairly macro) and
           | the little details that really matter to assessing the
           | security posture of an individual customer. A premium
           | discount for those who undergo intensive annual security
           | audits by respected professionals would be a good start;
           | then you can begin to rate each auditor's performance tier
           | based on the relative number of their clients impacted.
        
         | notesinthefield wrote:
         | Many in my area hedge coverages against what they estimate
         | attackers might want because there simply isn't enough talent
         | in the infosec sector. The policy always looks better than
         | increasing headcount.
        
         | alliao wrote:
         | anything untraceable is too enticing for insiders. the only way
         | to shift this dynamic is to let everyone know that the
         | companies that are hit are rife with rats eating investor value
         | by using anonymous means to transfer wealth from company to
         | insider. let them figure it out. in the mean time, unless
         | proven otherwise, I'd treat all ransomware jobs as insider
         | jobs.
        
         | yunohn wrote:
         | > I hope we soon get to the point where companies that do not
         | properly invest in cyber security get wiped out and ones that
         | take matters seriously survive
         | 
         | Could you provide some more detail on this viewpoint? Because I
         | find this quite infeasible and toxic.
         | 
         | While the most obvious security failures could be patched, it's
         | near impossible to guarantee perfect security for most
         | reasonably sized companies. And for most non-tech or smaller
         | businesses, it's prohibitively expensive.
         | 
         | Would you rather theft insurance also be outlawed, since the
         | fault is on the home owner who failed to secure their home well
         | enough?
        
           | miohtama wrote:
           | Stock buybacks are record high:
           | 
           | https://www.cnbc.com/2021/10/27/stock-buybacks-surge-to-
           | like...
           | 
           | Meanwhile cybersecurity IT talent is underpaid, understaffed
           | and hard to retain: https://leadcomm.com.br/wp-
           | content/uploads/2020/03/State-of-...
           | 
           | Companies simply do not invest enough in a critical area.
           | If capitalism works, companies that constantly optimise for
           | short-term profit over investing in survivability should
           | eventually go broke. If this does not happen then there is
           | no incentive for companies to do the right thing and invest
           | in security in the first place.
           | 
           | It will never be 100% secure, but right now we are maybe 50%
           | secure when we should be 95% secure.
        
             | darkwater wrote:
             | > Meanwhile cybersecurity IT talent is underpaid,
             | understaffed and hard to retain
             | 
             | If it's hard to retain talent, that means someone else is
             | paying better; it cannot be underpaid at the same time.
        
               | Mountain_Skies wrote:
               | I left cybersecurity and went back to software
               | development due to the difference in wages.
        
               | vsareto wrote:
                | Ditto. Good pentesters arguably have more skills than I
                | do, since I just do business apps, but I made about
               | $40k less as a pentester than what I do now. Plus the
               | pentesting interviews are just as annoying as the
               | development interviews.
        
               | darkwater wrote:
               | Fair enough, I missed this possibility. I'm sorry for
               | you; hope you can get back to doing what you like most.
        
               | miohtama wrote:
               | It also might be hard to retain, because people involved
               | in cybersecurity are not given proper authority, respect
               | or other organisational resources to do their job.
        
               | purple_turtle wrote:
               | 1) they could be changing jobs away from being security
               | researcher
               | 
               | 2) they can be changing jobs to one that is just unpaid,
               | but they are treated more seriously
               | 
               | 3) they could be changing from severely underpaid to just
               | underpaid
        
           | mlac wrote:
           | Yeah this would be bad for innovation as well. All things
           | equal, the startups that emphasize security to the detriment
           | of functionality will lose to those that get the product to
           | market faster and secure it later.
           | 
           | Not saying this is ideal (I'd love security to be a
           | competitive advantage and properly required so startups start
           | prioritizing it), but I'd say we need to focus security on
           | the platforms and tech used by most startups so that their
           | attack surface is their unique product.
        
             | miohtama wrote:
             | This is a good comment.
             | 
              | However, most ransomware targets are well-established
              | companies that already have insurance, as the article
              | stated. The insurance pays for any lack of cybersecurity
              | on their behalf. Until it doesn't.
        
           | bagacrap wrote:
           | Insurance should be used in addition to security best
           | practices, not as a replacement. OP did not suggest that
           | insurance should be outlawed, but that "just relying" on it
           | should be a strategy that the market snuffs out. As a tech
           | professional, it's hard to have sympathy for companies that
           | are "hacked" via very basic exploits. I doubt Schlage would
           | have much sympathy for a burglarized homeowner who had
           | neglected to invest in locks.
        
           | elliekelly wrote:
           | With homeowners insurance you usually have to prove there was
           | some visible damage or evidence of forced entry in order to
           | make a claim from a burglary. That's because the homeowner
           | has a duty to prevent and mitigate losses. I can't leave my
           | front door unlocked and say "Take whatever you want! I can
           | just make a claim to Allstate!"
           | 
           | But that seems to be the preferred approach to cybersecurity
           | at some companies...
        
           | __s wrote:
           | Insurance coverage can be conditional on sufficient efforts
           | being made. Insuring against ransomware should come along
           | with a list of preconditions & random security auditing
           | _(sure, there'll be some stupid rules like "no having port
           | 22 open" but if you want your own security rules you can do
           | that without insurance)_. Comparing physical security with
           | cyber security is facetious. Anyone in the world has to make
           | a serious effort to break into my home, & they'll have to do
           | it on Canadian soil. All for a pretty lousy pay off. These
           | incentives are not the same in software security
        
             | staticautomatic wrote:
             | In the insurance world, sufficient efforts aren't always
             | sufficient to mitigate this kind of problem. In commercial
             | wildfire risk and in municipal insurance (think police
             | brutality cases), for example, precautionary prescriptions
             | often aren't enough to keep entities from becoming
             | uninsurable due to things like litigation risk.
        
             | Haegin wrote:
             | Some cyber insurers are already doing this. I know at least
             | one that scans the company network when quoting and will
             | decline applicants who have open RDP detected on their
             | network for example.
        
           | dd82 wrote:
           | At coalition, we scan the infra when deciding to make an
           | automated quote or do a secondary manual review. So being
           | able to have data about what the possible insured actually
           | has in terms of hardware and practices (MFA, backups, etc) is
           | very valuable. When we decline, the applicant gets the
           | reasons why for the decline, and they're free to re-apply
           | after implementing fixes and good practices.
           | 
           | We're not demanding perfect security. We do ask for good
           | practices and evaluate risk from there. Security does not
           | need to be expensive, but many places see it as second or
           | third tier priority until shit hits the fan, and _that_ is
           | where it gets prohibitively expensive. Penny wise, pound
           | foolish.
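           | 
           | That triage flow might look roughly like this (a
           | hypothetical sketch; the port list and decision rules are
           | illustrative, not Coalition's actual underwriting criteria):

```python
# Hypothetical underwriting triage: scan findings go in, and the
# applicant gets an automated quote, a manual review, or a decline
# with the reasons listed (so they can fix things and re-apply).
# Port numbers and rules are illustrative only.
RISKY_PORTS = {3389: "open RDP", 445: "open SMB", 23: "open telnet"}

def triage(open_ports, has_mfa, has_offline_backups):
    reasons = [RISKY_PORTS[p] for p in open_ports if p in RISKY_PORTS]
    if not has_mfa:
        reasons.append("no MFA")
    if not has_offline_backups:
        reasons.append("no offline backups")
    if not reasons:
        return ("auto-quote", [])
    if len(reasons) == 1:
        return ("manual-review", reasons)
    return ("decline", reasons)  # applicant sees why, can re-apply

print(triage({443, 22}, True, True))   # ('auto-quote', [])
print(triage({3389}, True, False))     # ('decline', ['open RDP', 'no offline backups'])
```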
        
             | Root_Denied wrote:
             | This is where I've been predicting the insurance side of
             | cybersec going for a while.
             | 
             | You have some baseline required security for the premium
             | that offers X coverage, and you can reduce the premium or
             | increase the coverage if you can prove Y standards have
             | been met. Get a list of approved 3rd party
             | auditors/pentesting companies, have them certify what level
             | your company is at, apply appropriate discount.
             | 
             | Insurance costs are a language that businesses speak
             | fluently, so I think this has a higher chance of moving the
             | needle on cybersecurity standards and have them actually be
             | implemented across the board.
        
           | PicassoCTs wrote:
           | In an ecosystem with permanent hostilities, all creatures
           | become crabs.
           | 
           | https://en.wikipedia.org/wiki/Carcinisation
           | 
           | It's a nice example of self-taxation. If one refuses to pay
           | taxes for the police and state, one pays one's gains to a
           | guarded community, an armored car and insurance against
           | criminal acts.
        
           | csydas wrote:
           | > While the most obvious security failures could be patched,
           | it's near impossible to guarantee perfect security for most
           | reasonably sized companies. And for most non-tech or smaller
           | businesses, it's prohibitively expensive. Would you rather
           | theft insurance also be outlawed, since the fault is on the
           | home owner who failed to secure their home well enough?
           | 
           | I deal with ransomware a lot with clients, and while I
           | absolutely sympathize with persons who are hit by an attack,
           | I feel there are many different aspects to this where
           | applying existing law/comparisons is more derailing than
           | helpful.
           | 
           | 1. Perfect is the enemy of good
           | 
           | I see far too many people who get so hung up on this [0] that
           | they ignore practical and easily fixable problems as a
           | result. Far too many companies get the idea they need to
           | protect from nation-state actors when, as is mostly a given,
           | it's infeasible to do so as any given nation-state is likely
           | going to be able to throw more resources at a problem than a
           | given company. Even North Korea (maybe with help from China)
           | has a fairly successful cyberwarfare division, and they
           | really did almost get away with a major heist. [1]
           | 
           | 2. The conclusion one should draw from the above isn't that
           | it's hopeless and you shouldn't do any protection, but
           | instead, you should scale your protection for the risk you
           | carry. Nation States "likely" aren't interested in Bob and
           | Alice's Autoshop in Shucksville, USA. But ransomware gangs
           | that just scan casually for openings certainly are. The
           | answer here is to consider your value practically and check
           | your at-risk assets: what's exposed online, and what ways do
           | outsiders have into your environment?
           | 
           | 3. Avoid checklists and passive scanners
           | 
           | Passive scanners are a huge pain in the ass for my company as
           | an MSP as we get clients that end up with a checklist they
           | paid a non-trivial amount of money for and the
           | recommendations would be funny if the payment for the
           | recommendation wasn't so high. Passive scanning applications,
           | even if we assume they have good intentions, have poor
           | implementation and in fact reinforce the core problem; they
           | allow IT administrators to just not think about the situation
           | and instead fall back on the excuse of "I followed the list,
           | there was nothing I could do", even when they had gaping
           | holes in their infrastructure or extremely unsafe security
           | practices.
           | 
           | I could continue on, but I need to address the inevitable
           | comparison to other crimes that produced the term "victim
           | blaming", and while indeed ransomware attacks share some
           | similarities, I'd like to point out where I see some
           | differences.
           | 
           | 1. Victims of personal attacks only have the duty to
           | themselves; the implication is that reasonably, a person
           | should be able to exist without another infringing on their
           | freedoms and rights.
           | 
           | 2. Victims of ransomware at a business level are responsible
           | for the data of others; even if it's their own company,
           | that's employee data which likely is not theirs, employee
           | money among their resources, and so on. There is, I think,
           | an expectation here that they're responsible not just for
           | their own security, but have accepted an implied
           | responsibility for the livelihood of others. That is, I
           | expect that someone who undertakes responsibility for others
           | has a duty to go beyond to protect that which belongs to
           | others. To me, this means when you accept responsibility for
           | someone else's livelihood, you don't have the right to decide
           | to expose their livelihood to potential risk without their
           | consent.
           | 
           | 3. An attack is an attack, for sure, and regardless of the
           | situation, the attacker is the one at fault. The issue for me
           | is more about those who choose to put others' property at
           | risk because of decisions the responsible person made,
           | because as above, my expectation is that when it involves
           | someone else, unless they have 100% transparency into
           | decisions I make, I have a duty to make the most responsible
           | decisions for a given situation. If a sysadmin/netadmin
           | implements lazy security because it's convenient for them and
           | gets attacked, yes, I hold the attacker accountable. But I
           | also then hold the admins accountable for not being
           | responsible with my property.
           | 
           | The responsibility for others is where I see that there's a
           | significant difference, as it's no longer about personal
           | liberty and freedom to exist online as much as it is about
           | "did you take the reasonable steps to protect the data you
           | are the protector of?" Me walking down the street, I should
           | have an expectation I shouldn't be attacked, and I should
           | have recourse if I am. If my data is exfiltrated by an attack
           | that someone holding my data was a victim of, I have a right
           | to know if that person was negligent while they were in
           | charge of my data, and if they were found to be committing
           | obvious flaws, despite that they were attacked, I have a
           | right to know this and likely should have some recourse.
           | 
           | [0] - "it's near impossible to guarantee perfect security"
           | [1] - https://www.bbc.com/news/stories-57520169
        
             | csydas wrote:
             | Also, after reflection I suppose custody is the word I'm
             | looking for :)
        
         | markus_zhang wrote:
         | Going to extremely difficult for small-mid companies. Instead
         | we should ask government to be more serious about the threats
         | and strike out if applicable. At the same time, should also
         | find ways to monitor bitcoin movements.
        
           | dwd wrote:
           | Even more so for a boot-strapped startup - fronting 100s of
           | dollars in insurance is a big ask to get started without
           | taking on investor money.
           | 
           | In this environment, would an early-stage JIRA, Salesforce
           | or Basecamp have gotten off the ground without incident?
           | 
           | Probably a place where a viable startup should be able to
           | apply for Gov funding or tax breaks for security investment,
           | much like they provide funding for R&D.
        
         | ip26 wrote:
         | One route is that insurers require due diligence from their
         | clients & employ pen-testers, perhaps adjusting your rate based
         | on the findings of the pen-testers...
         | 
         | Actually, this could be a viable business model for the white
         | hats of the world.
        
         | chii wrote:
         | > get wiped out
         | 
         | Like a real biological virus, these ransomware gangs would
         | likely reduce their ransom to something the company can bear,
         | and leech parasitically rather than demand so much that their
         | victims die off.
         | 
         | The cost will be borne by the ultimate consumers, as long as
         | cybersecurity remains more expensive on average than the cost
         | of the ransom.
        
         | rmah wrote:
         | This sort of attitude does not help expand the adoption of
         | better security measures by organizations. It's the
         | cybersecurity equivalent of "just say no", "this is your brain
         | on drugs" or "all sex before marriage is evil".
        
           | lima wrote:
           | Buying insurance instead of adopting better security measures
           | is not a helpful attitude either.
           | 
           | Many companies _don't_ invest in better security measures
           | because it's expensive to do so for decades' worth of legacy
           | systems and they'd rather take a small risk and get insured
           | for it.
        
       | null_object wrote:
       | Just ban cryptocurrency
        
         | buttonpusher wrote:
         | Prohibition is not very effective against distributed anonymous
         | protocols.
        
           | goldenkey wrote:
           | A ship isn't very useful without a dock. Nor crypto without
           | user-friendly exchanges.
        
             | buttonpusher wrote:
             | There are already several unauthorized exchanges, despite
             | the popularity of the authorized ones. Banning the
             | authorized ones would be a big boon for the illegal ones.
        
       | outside1234 wrote:
       | More dividends from crypto!!!
       | 
       | (NOTE: We are all paying for this with higher prices to cover the
       | extortion schemes.)
        
       | varjag wrote:
       | Just stop using Windows already.
        
         | fortran77 wrote:
         | https://www.bleepingcomputer.com/news/security/hive-ransomwa...
        
           | varjag wrote:
           | _However, as Slovak internet security firm ESET discovered,
           | Hive's new encryptors are still in development and still
           | lack functionality.
           | 
           | The Linux variant also proved to be quite buggy during ESET's
           | analysis, with the encryption completely failing when the
           | malware was executed with an explicit path._
           | 
           | Not a surprise really to anyone who tried to update GRUB on
           | an unfamiliar system.
        
         | user-the-name wrote:
         | The days when you could blame Windows for being more insecure
         | than other OSes are long, long gone. That hasn't been the case
         | for probably at least a decade at this point.
        
           | SquibblesRedux wrote:
           | I think there is a grain of truth in the idea of not using
           | Windows. However, as you point out, it is not just Windows.
           | All of our IT systems have evolved in fits and spurts with
           | insufficient investment in security from the ground up. It is
           | entirely plausible that most businesses are doing IT the
           | "wrong way" given the current requirements of
           | interconnectivity and global commerce.
           | 
           | While there are numerous ideas on what the "right way" to do
           | IT might look like, no one has reconciled that with the
           | economics of current business practices.
        
             | user-the-name wrote:
             | The main security benefit you get from not using Windows is
             | almost purely that Windows is the most common, and thus the
             | most targeted. And even that is changing, as Windows slowly
             | wanes in popularity and high-value targets use other
             | systems.
        
               | staticassertion wrote:
               | Not really true. There are a number of problems with
               | Windows and, more generally, the Microsoft ecosystem.
               | 
               | One clear area is that Windows has very weak separation
               | of admin from user. There are tons of great mechanisms
               | like integrity levels, but for a user's laptop Microsoft
               | does not consider UAC to be a boundary between user and
               | Admin and so escalations are ubiquitous.
               | 
               | This is certainly not the case with OSX and Linux.
               | 
               | Windows also has a litany of persistence mechanisms, far
               | more than I'm aware of with OSX and Linux. The Windows
               | registry is this incredible hive of ways to execute code.
               | 
               | There's all sorts of debt involved that really puts
               | Windows at a disadvantage, despite their significant
               | investments that we should all celebrate.
               | 
               | Once you add things like Word, Active Directory, etc, it
               | gets really bad. If you're building a traditional
               | "microsoft" network you're setting yourself up for
               | ransomware.
        
           | varjag wrote:
           | People keep repeating this hoping it'll make it true. But
           | every single system hit in a high profile ransom attack has
           | been Windows and every firsthand account of ransomware I
           | personally know of was on a Windows system. This is despite
           | the vast majority of critical infrastructure running on
           | something else than Windows, and ever-increasing penetration
           | of Apple into corporate.
           | 
           | All systems have varying degrees of vulnerability but only
           | Windows lends itself to automating them to such an
           | astonishing degree.
        
           | arpa wrote:
           | Windows still hides file extensions by default and is as
           | user-hostile as ever.
        
         | count wrote:
         | It's not Windows as much as it is Active Directory and all-
         | your-eggs-in-one-basket authn/authz stuff for file shares.
        
           | marcosdumay wrote:
           | It's both: the OS with the worst security history on the
           | market, and the ill-conceived all-your-eggs-in-one-basket
           | authn/authz that depends on the OS's security.
           | 
           | In principle, centralized authn/authz isn't a bad thing.
        
       | LatteLazy wrote:
       | A few thoughts in no particular order:
       | 
       | It's pretty standard for human ransom (aka kidnapping) policies
       | to require the insured to deny they're covered to anyone who
       | asks. So who is covered and how much for is never really public.
       | I assume the same applies here.
       | 
       | Ransomware is a new threat. So there isn't a good data set to do
       | statistical analysis on yet. Doubly so as these events aren't
       | reportable etc.
       | 
       | Ransomware threat risk is quite client specific. How good is
       | infosec etc? How much do you rely on these systems? Is your
       | company an obvious target? And none of these factors can be
       | easily assessed by an insurer.
       | 
       | There are national political concerns around both the general
       | risk and specific ones. And insurers hate anything like that.
       | They won't cover acts of war, what about "acts of cyberwar"?
        
       | notpachet wrote:
        | The ENISA Threat Landscape report [1], which this article
        | cites in an offhand way, is worth reading in its own right.
       | 
       | [1] https://www.enisa.europa.eu/publications/enisa-threat-
       | landsc...
        
       ___________________________________________________________________
       (page generated 2021-11-20 23:01 UTC)