[HN Gopher] The six dumbest ideas in computer security (2005)
___________________________________________________________________
The six dumbest ideas in computer security (2005)
Author : lsb
Score : 129 points
Date : 2024-07-14 05:48 UTC (17 hours ago)
(HTM) web link (www.ranum.com)
(TXT) w3m dump (www.ranum.com)
| lsb wrote:
| Default permit, enumerating badness, penetrate and patch, hacking
| is cool, educating users, action is better than inaction
| Etheryte wrote:
| I think commenting short summaries like this is not beneficial
| on HN. It destroys all nuance, squashes depth out of the
| discussion, and invites people to comment solely on the
| subtitles of a full article. That's not the kind of discussion
| I would like HN to degrade into -- if I wanted that, I'd go to
| Buzzfeed. Instead I hope everyone takes the time to read the
| article, or at least some of the article, before commenting.
| Short tldrs don't facilitate that.
| smokel wrote:
| As much as I agree with this, I must admit that it did
| trigger me to read the actual article :)
|
| I assume that in a not-so-distant future, we'll get AI-powered
| summaries of the target page for free, similar to how
| Wikipedia shows a preview of the target page when hovering
| over a link.
| arcbyte wrote:
| While I agree with all the potential downsides you mentioned,
| I still lean heavily on the side of short summaries being
| extremely helpful.
|
| This was an interesting title, but having seen the summary
| and discussion, I'm not particularly keen to read it. In fact
| I would never have commented on this post except to respond
| to yours.
| omoikane wrote:
| Looks like lsb was the one who submitted the article, and
| this comment appears to be submitted at the same time (based
| on same timestamp and consecutive id in the URL), possibly to
| encourage people to read the article in case the title
| sounded like clickbait.
| chuckadams wrote:
| On one hand I agree; on the other, the article itself is
| pretty much a bunch of grumpy and insubstantial hot takes
| ...
| gnabgib wrote:
| (2005) Previous discussions:
|
| 2015 (114 points, 56 comments)
| https://news.ycombinator.com/item?id=8827985
|
| 2023 (265 points, 202 comments)
| https://news.ycombinator.com/item?id=34513806
| philipwhiuk wrote:
| From the 2015 submission:
|
| > Perhaps another 10 years from now, rogue AI will be the
| primary opponent, making the pro hackers of today look like the
| script kiddies.
|
| Step on it OpenAI, you've only got 1 year left ;)
| umanghere wrote:
| > 4) Hacking is Cool
|
| Pardon my French, but this is the dumbest thing I have read all
| week. You simply _cannot_ work on defensive techniques without
| understanding offensive techniques - plainly put, good luck
| developing exploit mitigations without having ever written or
| understood an exploit yourself. That's how you get a slew of
| mitigations and security strategies that have questionable, if
| not negative, value.
| blablabla123 wrote:
| That. Also, not educating users is a bad idea, but it also
| becomes quite clear that the article was written in 2005, when
| the IT/security landscape was a much different one.
| crngefest wrote:
| I concur with his views on educating users.
|
| It's so much better to prevent them from doing unsafe things
| in the first place, education is a long and hard undertaking
| and I see little practical evidence that it works on the
| majority of people.
|
| >But, but, but I really really need to do $unsafething
|
| No, in almost all cases you don't - it's just taking shortcuts
| and cutting corners that is the problem here.
| blablabla123 wrote:
| The attacks with the biggest impact are usually social
| engineering attacks though. It can be as simple as shoulder
| surfing, tailgating or as advanced as an AI voice scam.
| Actually these have been widely popularized since the early
| 90s by people like Kevin Mitnick.
| TacticalCoder wrote:
| I don't think the argument is that dumb. For a start there's a
| difference between white hat hackers and black hat hackers.
| Then here he's talking specifically about people who pentest
| known exploits on broken systems.
|
| Think about it this way: do you think Theo de Raadt (of
| OpenBSD and OpenSSH fame) spends his time trying to see if Acme
| corp is vulnerable to OpenSSH exploit x.y.z, which was
| patched 3 months ago?
|
| I don't care about attacking systems: it is of very little
| interest to me. I've done it in the past: it's all too easy
| because we live in a mediocre world full of insecure crap.
| However I love spending some time making life harder for black
| hat hackers.
|
| We _know_ what creates exploits and yet people everywhere are
| going to repeat the same mistakes over and over again.
|
| My favorite example is Bruce Schneier writing, when Unicode
| came out, that _"Unicode is too complex to ever be secure"_.
| _That_ is the mindset we need. But it didn't stop people using
| Unicode in places where we should never have used it, like in
| domain names for example. Then when you test a homoglyphic
| attack on IDN, it's not "cool". It's lame. It's pathetic. Of
| course you can do homoglyphic attacks and trick people: an
| actual security expert (not a pentester testing known exploits
| on broken configs) warned about that 30 years ago.
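|
| (A minimal illustration, not from the article: the whole trick
| is that two different code points render identically. The
| domain below is hypothetical.)
|
|     import unicodedata
|
|     ascii_name = "apple.com"
|     spoofed = "\u0430pple.com"  # first letter is Cyrillic, not Latin
|
|     # Visually identical, but not the same string at all.
|     print(ascii_name == spoofed)  # False
|     for ch in (ascii_name[0], spoofed[0]):
|         print(hex(ord(ch)), unicodedata.name(ch))
|     # 0x61 LATIN SMALL LETTER A
|     # 0x430 CYRILLIC SMALL LETTER A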
|
| There's nothing to "understand" by abusing such an exploit
| yourself besides "people who don't understand security have
| made stupid decisions".
|
| OpenBSD and OpenSSH are among the most secure software ever
| written (even if OpenSSH had a few issues lately). I don't
| think Theo de Raadt spends his time pentesting so that he can
| then write secure software.
|
| What strikes me the most is the mediocrity of most exploits.
| Exploits that, had the software been written with the mindset
| of the person who wrote TFA, would for the most part not have
| been possible.
|
| He is spot on when he says that _default permit_ and _enumerate
| badness_ are dumb ideas. I think it's worth trying to
| understand what he means when he says "hacking is not cool".
| SoftTalker wrote:
| > My favorite example is Bruce Schneier writing, when Unicode
| came out, that "Unicode is too complex to ever be secure".
|
| The same is true of containers, VMs, sandboxes, etc.
|
| The idea that we all willingly run applications that
| continuously download and execute code from all over the
| internet is quite remarkable.
| klabb3 wrote:
| Agreed, eyebrows were elevated at this point in the article. If
| you want to build a good lock, you definitely want to consult
| the lock picking lawyer. And it's not just a poor choice of
| title either:
|
| > teaching yourself a bunch of exploits and how to use them
| means you're investing your time in learning a bunch of tools
| and techniques that are going to go stale as soon as everyone
| has patched that particular hole
|
| Ah yes, I too remember when buffer overflows, xss and sql
| injections became stale when the world learned about them and
| they were removed from all code bases, never to be seen again.
|
| > Remote computing freed criminals from the historic
| requirement of proximity to their crimes. Anonymity and freedom
| from personal victim confrontation increased the emotional ease
| of crime [...] hacking is a social problem. It's not a
| technology problem, at all. "Timid people could become
| criminals."
|
| Like any white collar crime then? Anyway, there's some truth in
| this, but the analysis is completely off. Remote hacking has
| lower risk, is easier to conceal, and you can mount many
| automated attacks in a short period of time. Also, feelings of
| guilt are often tamed by the victim being an (often rich)
| organization. Nobody would glorify, justify or brag about
| deploying ransomware on some grandma. Those crimes happen, but
| you won't find them on tech blogs.
| watwut wrote:
| You do not have to be able to build an actual SQL injection
| yourself in order to have properly secured queries. Same with
| XSS injection. Having a rough idea of the attacks is probably
| necessary, but beyond that you primarily need the discipline
| and correct frameworks that won't let you shoot yourself in
| the foot.
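|
| (As a rough sketch of that point, using Python's built-in
| sqlite3 - the table and input are made up for illustration:)
|
|     import sqlite3
|
|     conn = sqlite3.connect(":memory:")
|     conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
|     conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
|
|     evil = "nobody' OR '1'='1"
|
|     # Concatenation lets the input rewrite the query.
|     print(conn.execute(
|         "SELECT role FROM users WHERE name = '" + evil + "'"
|     ).fetchall())  # [('admin',)] - injection worked
|
|     # Parameterized query: the driver treats the value as data.
|     print(conn.execute(
|         "SELECT role FROM users WHERE name = ?", (evil,)
|     ).fetchall())  # [] - no such user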
| ang_cire wrote:
| This author is not a security practitioner, they're a
| developer, and they clearly think they are super special
| snowflakes who can write _perfectly secure code_ , which is as
| good a reason to never let them write code for you as I can
| think of.
| iandanforth wrote:
| And yet the consequence of letting people like this run your
| security org is that it takes a JIRA ticket and multiple days,
| weeks, or forever to be able to install 'unapproved' software
| on your laptop.
|
| Then if you've got the software you need to do your job you're
| stuck in endless cycles of "pause and think" trying to create the
| mythical "secure by design" software which _does not exist_. And
| then you get hacked anyway because someone got an email (with no
| attachments) telling them to call the CISO right away, who then
| helpfully walks them through a security "upgrade" on their
| machine.
|
| Caveats: Yes, there is a balance, and log anomaly detection
| followed by actual human inspection is a good idea!
| dale_glass wrote:
| "We're Not a Target" isn't a minor dumb. It's the standpoint of
| every non-technical person I've ever met. "All I do with my
| computer is to read cooking recipes and upload cat photos. Who'd
| want to break in? I'm _boring_. "
|
| The best way I found to change their mind is to make a car
| analogy. Who'd want to steal your car? Any criminal with a use
| for it. Why? Because any car is valuable in itself. It can be
| sold for money. It can be used as a getaway vehicle. It can be
| used to crash into a jewelry shop. It can be used for a joy ride.
| It can be used to transport drugs. It can be used to kill
| somebody.
|
| A criminal stealing a car isn't hoping that there are Pentagon
| secrets in the glove box. They have a use for the car itself. In
| the same way, somebody breaking into your computer has uses for
| the computer itself. They won't say no to finding something
| valuable, but it's by no means a requirement.
| jampekka wrote:
| A major dumb is that security people think breaking in is the
| end of the world. For the vast majority of users it's not, and
| it's a balance between usability and security.
|
| I know it's rather easy to break through a glass window, but I
| still prefer to see outside. I know I could faff with multiple
| locks for my bike, but I'd rather accept some risk of it being
| stolen for the convenience.
|
| If there's something I really don't want to risk being stolen,
| I can take it into a bank's vault. But I don't want to live in
| a vault.
| dale_glass wrote:
| > A major dumb is that security people think breaking in is
| the end of the world. For vast majority of users it's not,
| and it's a balance between usability and security.
|
| End of the world? No. But it's really, really bad.
|
| When you get your stolen car back, problem over.
|
| But your broken-into system should in most cases be
| considered forever tainted until fully reinstalled. You can't
| enumerate badness. That the antivirus got rid of one thing
| doesn't mean they didn't sneak in something it didn't find.
| You could still be a DoS node, a CSAM distributor, or a spam
| sender.
| kazinator wrote:
| > _When you get your stolen car back, problem over._
|
| Not if it contains computers; either the original ones it
| had before being stolen, or some new, gifted ones you don't
| know about.
| jampekka wrote:
| > But your broken-into system should in most cases be
| considered forever tainted until fully reinstalled.
|
| Reinstalling an OS is not really, really bad. It's an
| inconvenience. Less so than e.g. having to get new cards
| after a lost wallet or getting a new car.
|
| Security people don't seem to really assess what the actual
| consequences of breaches are. Just that they are "really
| really bad" and have to be protected against at all costs.
| Often the cost is literally an unusable system.
| dave4420 wrote:
| Is reinstalling the OS enough?
|
| Isn't there malware around that can store itself in the
| BIOS or something, and survive an OS reinstall?
| Andrex wrote:
| It would need to be a zero-day (or close to it), which
| means nation-state level sophistication.
|
| You can decide for yourself whether to include that in
| your personal threat analysis.
| marcosdumay wrote:
| > Reinstalling an OS is not really, really bad. It's an
| inconvenience.
|
| Reinstalling an OS is not nearly enough. You have to
| reinstall all of them, without letting the "dirty" ones
| contaminate the clean part of your network; you have to
| re-obtain all of your binaries; and good luck trusting
| any local source code.
|
| The way most places are organized today, getting
| computers infected is a potentially unfixable issue.
| worik wrote:
| > Security people don't seem to really assess what are
| the actual consequences of breaches. Just that they are
| "really really bad" and
|
| No
|
| Security people are acutely aware of the consequences of
| a breach.
|
| Look at the catastrophic consequences of the recent wave
| of ransomware attacks.
|
| Lax security at all levels, victim blaming (they clicked
| a link....) and no consequences I know of for those
| responsible for that bad design. Our comrades built those
| vulnerable systems
| SoftTalker wrote:
| > When you get your stolen car back, problem over.
|
| > But your broken into system should in most cases be
| considered forever tainted
|
| Actually this is exactly how stolen cars work. A stolen car
| that is recovered will have a branded title from then on
| (at least it will if an insurance company wrote it off).
| smokel wrote:
| I'm afraid this is a false dichotomy.
|
| People can use HTTPS now instead of HTTP, without degrading
| usability. This has taken a lot of people a lot of work, but
| everyone gets to enjoy better security. No need to lock and
| unlock every REST call as if it were a bicycle.
|
| Also, a hacker will replace the broken glass within
| milliseconds, and you won't find out it was ever broken.
| jampekka wrote:
| It shouldn't be a dichotomy, but security zealots not
| caring about usability or putting the risks in context
| makes it such.
|
| HTTPS by default is good, especially after Let's Encrypt.
| Before that it was not worth the hassle/cost most of the
| time.
|
| E.g. forced MFA everywhere is not good.
|
| > Also, a hacker will replace the broken glass within
| milliseconds, and you won't find out it was ever broken.
|
| This is very rare in practice for normal users. Again,
| risks in context please.
| izacus wrote:
| You're ignoring that HTTPS took decades to be default
| thanks to massive work of a lot of security engineers who
| UNDERSTOOD that work and process around certificates was
| too onerous and hard for users. It took them literally
| decades of work to get HTTPS cert issuance to such a low-cost
| process that everyone does it. It *really* cannot be
| overstated how important that work was.
|
| Meanwhile, other security zealots were just happy to scream
| at users for not sending 20 forms and thousands of dollars
| to cert authorities.
|
| Usability matters - and the author of this original rant
| seems to be one of those security people who don't
| understand why the systems they're guarding are useful, who
| uses them, and how. That's the core security cancer
| still in the wild - security experts not understanding just
| how transparent the security has to be and that it's
| sometimes ok to have a less secure system if that means
| users won't do something worse.
| dspillett wrote:
| _> I know it's rather easy to break through a glass window,
| but I still prefer to see outside._
|
| Bad analogy. It is not that easy to break modern multi-layer
| glazing, and it is also a lot easier to _get away with_
| breaking into a computer or account undetected, until it is
| time to let the user know (for a ransom attempt or other
| such), than with breaking a window. Locking your doors is a
| much better analogy. You don't leave them unlocked in case you
| forget your keys, do you? That would be a much better analogy
| for choosing convenience over security in computing.
|
| _> I know I could faff with multiple locks for my bike, but
| I'd rather accept some risk of it being stolen for the
| convenience._
|
| Someone breaking into a computer or account isn't the same as
| them taking a single object. It is more akin to them getting
| into your home or office, or on a smaller scale a briefcase.
| They don't take an object, but they can collect information
| that will help in future phishing attacks against you and
| people you care about.
|
| The intruder could also operate from the hacked resource to
| continue their attack on the wider Internet.
|
| _> A major dumb is that security people think breaking in is
| the end of the world._
|
| The major dumb of thinking like this is that breaking in is
| often not the end of anything, it can be the start or
| continuation of a larger problem. Security people know this
| and state it all the time, but others often don't listen.
| jampekka wrote:
| > The major dumb of thinking like this is that breaking in
| is often not the end of anything, it can be the start or
| continuation of a larger problem. Security people know this
| and state it all the time, but others often don't listen.
|
| This is exactly the counterproductive attitude I
| criticized. I told you why others don't often listen, but
| you don't seem to listen to that.
|
| Also, people do listen. They just don't agree.
| kazinator wrote:
| > _but I still prefer to see outside_
|
| Steel bars?
| dijit wrote:
| I think this is why the analogy holds up.
|
| Some situations definitely call for steel bars, some for
| having no windows at all.
|
| But for you and me, windows are fine, because the value of
| being inside my apartment is not the same value as being in
| a jewellers or in a building with good sight-lines to
| something even more valuable -- and the value of having
| unrestricted windows is high for us.
| gmuslera wrote:
| The act of breaking in is not even the end of it. It is not a
| broken window that you clearly see, replace, and forget
| about. It may be the start of a process, and you don't
| know what will happen down the road. But it won't be
| something limited to the affected computer or phone.
| oschvr wrote:
| > "If you're a security practitioner, teaching yourself how to
| hack is also part of the "Hacking is Cool" dumb idea."
|
| lol'd at the irony of the fact that this was posted here, Hacker
| News...
| smokel wrote:
| At the risk of stating the obvious, the word "hacker" has (at
| least) two distinct meanings. The article talks about people
| who try to break into systems, and Hacker News is about people
| who like to hack away at their keyboard to program interesting
| things.
|
| The world might have been a better place if we used the terms
| "cracker" and "tinkerer" instead.
| jrm4 wrote:
| Doubt it; it's utterly naive and wishful thinking to think
| that those two things are easily separable; it's never as
| simple as Good Guys wear White, Bad Guys wear Black, which is
| the level of intelligence this idea operates at.
| smokel wrote:
| It might be naive, but I'm not the only one using this
| distinction. See the "Definitions" section of
| https://en.wikipedia.org/wiki/Hacker
| moring wrote:
| > Think about it for a couple of minutes: teaching yourself a
| bunch of exploits and how to use them means you're investing your
| time in learning a bunch of tools and techniques that are going
| to go stale as soon as everyone has patched that particular hole.
|
| No, it means that you learn practical aspects alongside theory,
| and that's very useful.
| move-on-by wrote:
| I also took issue with this point. One does not become an
| author without first learning how to read. The usefulness of
| reading has not diminished once you publish a book.
|
| You must learn how known exploits work to be able to discover
| unknown exploits. When the known exploits are patched, your
| knowledge of how they occurred has not diminished. You may not
| be able to use them anymore, but surely that was not the goal
| in learning them.
| dvfjsdhgfv wrote:
| > The cure for "Enumerating Badness" is, of course, "Enumerating
| Goodness." Amazingly, there is virtually no support in operating
| systems for such software-level controls.
|
| Really? SELinux and AppArmor have existed since, I don't know,
| late nineties? The problem is not that these controls don't
| exist, it's just they make using your system much, much harder.
| You will probably spend some time "teaching" them first, then
| actually enable them, and still fight with them every time you
| install something or make other changes to your system.
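|
| (For what "enumerating goodness" means in the small - a
| default-deny allowlist, sketched here in Python with made-up
| paths and a placeholder hash rather than real SELinux or
| AppArmor policy:)
|
|     import hashlib
|
|     # Only explicitly listed binaries may run; everything else
|     # is denied by default.
|     ALLOWED_SHA256 = {
|         # sha256 digests of approved binaries would go here
|         "0" * 64,   # placeholder entry
|     }
|
|     def may_execute(path: str) -> bool:
|         with open(path, "rb") as f:
|             digest = hashlib.sha256(f.read()).hexdigest()
|         return digest in ALLOWED_SHA256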
| kazinator wrote:
| Penetrate and Patch is a useful exercise, because it lets the IT
| security team deliver some result and show they have value, in
| times when nothing bad is happening and everyone forgets they
| exist.
| jeffrallen wrote:
| mjr (as I always knew him from mailing lists and whatnot) seems
| to have given up on security and enjoys forging metal instead
| now.
|
| > Somewhere in there, security became a suppurating chest wound
| and put us all on the treadmill of infinite patches and massive
| downloads. I fought in those trenches for 30 years - as often
| against my customers ("no you should not put a fork in a light
| socket. Oh, ow, that looks painful. Here, let me put some boo boo
| cream on it so you can do it again as soon as I leave.") as for
| them. It was interesting and lucrative and I hope I helped a
| little bit, but I'm afraid I accomplished relatively nothing.
|
| Smart guy, hope he enjoys his retirement.
| lobsang wrote:
| Maybe I missed it, but I was surprised there was no mention of
| passwords.
|
| Mandatory password composition rules (excluding minimum length)
| and rotating passwords as well as all attempts at "replacing
| passwords" are inherintly dumb in my opinion.
|
| The first have obvious consequences (people writing passwords
| down, choosing the same passwords, adding 1) leading to the
| second which have horrible / confusing UX (no I don't want to
| have my phone/random token generator on me any time I try to do
| something) and default to "passwords" anyway.
|
| Please just let me choose a password of greater than X length,
| containing or not containing any characters I choose. That way I
| can actually remember it when I'm not using my phone/computer, in
| a foreign country, etc.
| belinder wrote:
| Minimum length is dumb too because people just append 1 until
| it fits
| rekabis wrote:
| I would love to see most drop-in/bolt-on authentication
| packages (such as DotNet's Identity system) adopt "bitwise
| complexity" as the only rule: not based on length or content,
| only the mathematical complexity of the bits used. KeePass
| uses this as an estimate of password "goodness", and it's
| altered my entire view of how appropriate any one password
| can be.
| Terr_ wrote:
| IIRC the key point there is that it's contextual to
| whatever generation scheme you used--or at least
| what method you told it was used--and it assumes the
| attacker knows the generation scheme.
|
| So "arugula" will score very badly in the context of a
| passphrase of English words, but scores better as a
| (supposedly) random assortment of lowercase letters, etc.
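|
| (Back-of-the-envelope version of that, with made-up list
| sizes - the estimate is entirely a function of the assumed
| generation scheme:)
|
|     import math
|
|     def bits_random(length, alphabet):
|         return length * math.log2(alphabet)
|
|     def bits_passphrase(words, wordlist):
|         return words * math.log2(wordlist)
|
|     # "arugula" as 7 random lowercase letters vs. as one word
|     # drawn from an 8,000-word list.
|     print(bits_random(7, 26))        # ~32.9 bits
|     print(bits_passphrase(1, 8000))  # ~13 bits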
| hunter2_ wrote:
| But when someone tries to attack such a password, as long as
| whatever the user devised isn't represented by an entry in
| the attack dictionary, the attack strategy falls back to
| brute force, at which point a repetition scheme is irrelevant
| to attack time. Granted, if I were creating a repetitive
| password to meet a length requirement without high mental
| load, I'd repeat a more interesting part over and over, not a
| single character.
| borski wrote:
| Sure. But most people add "111111" or "123456" to the end.
| That's why it's on top of every password list.
| fragmede wrote:
| on the other hand, it gave us the password game, so there's
| that.
|
| https://neal.fun/password-game/
| temporallobe wrote:
| I've been preaching this message for many years now. For
| example, since password generators basically make keys that
| can't be remembered, this has led to the advent of password
| managers, all protected by a single password. Your single
| point of failure is now just ONE password, and the consequence
| of it being compromised is that an attacker gains access to
| all of your passwords.
|
| The n-tries lockout rule is much more effective anyway, as it
| breaks the brute-force attack vector in most cases. I am not a
| cybersecurity expert, so perhaps there are cases where high-
| complexity, long passwords may make a difference.
|
| Not to mention MFA makes most of this moot anyway.
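|
| (A minimal lockout sketch - the thresholds and in-memory store
| are arbitrary assumptions, just to show the shape of the idea:)
|
|     import time
|
|     MAX_FAILURES = 5
|     LOCKOUT_SECONDS = 900
|     failures = {}  # username -> (count, time of first failure)
|
|     def is_locked(user):
|         count, since = failures.get(user, (0, 0.0))
|         recent = time.time() - since < LOCKOUT_SECONDS
|         return count >= MAX_FAILURES and recent
|
|     def record_failure(user):
|         count, since = failures.get(user, (0, time.time()))
|         failures[user] = (count + 1, since)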
| nouveaux wrote:
| Most of us can't remember more than one password. This means
| that if one site is compromised, then the attacker now has
| access to multiple sites. A password manager mitigates this
| issue.
| userbinator wrote:
| Vary the password per site based on your own algorithm.
| jay_kyburz wrote:
| AKA, put the name of the site in the password :)
| userbinator wrote:
| Not necessarily, but just a pattern that only you would
| likely remember.
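|
| (What such a "personal algorithm" might look like if done with
| a real keyed hash rather than ad-hoc string mangling - a sketch
| only, the master secret and site name are placeholders, and if
| the master secret leaks every derived password goes with it:)
|
|     import base64, hashlib, hmac
|
|     def site_password(master, site, length=16):
|         mac = hmac.new(master.encode(), site.encode(), hashlib.sha256)
|         return base64.urlsafe_b64encode(mac.digest()).decode()[:length]
|
|     print(site_password("one memorized secret", "news.ycombinator.com"))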
| cardanome wrote:
| People used to memorize the phone numbers of all important
| family members and close friends without much trouble.
| Anyone without a serious disability should have no trouble
| memorizing multiple passwords.
|
| Sure, I do use password managers for random sites and
| services, but I probably have a low-double-digit number
| of passwords memorized for the stuff that matters.
| Especially for stuff that I want to be able to access in an
| emergency when my phone/laptop gets stolen.
| borski wrote:
| > so your single point of failure is now just ONE password,
| the consequences of which would be that an attacker would
| have access to all of your passwords.
|
| Most managers have 2FA, or an offline key, to prevent this
| issue, and encrypt your passwords at rest so that without
| that key (and the password) the database is useless.
| nottorp wrote:
| > and encrypt your passwords at rest
|
| I haven't turned off my desktop this year. How does
| encryption at rest help?
| doubled112 wrote:
| My password manager locks when I lock my screen. You can
| configure it to lock after some time.
|
| The database is encrypted at rest.
| necovek wrote:
| Locking is not sufficient: it would need to overwrite the
| memory where passwords were decrypted to. With virtual
| memory, this becomes harder.
| compootr wrote:
| What's sufficient depends on your threat model.
|
| Normal dude in a secure office? An auto-locking password
| manager would suffice.
|
| Someone that should be concerned with passwords in-memory
| is someone who believes another has full physical access
| to their computer (and can, say, freeze RAM in nitrogen
| to extract passwords).
|
| My largest concern would be an adversary snatching my
| phone while my password manager was actively open.
| freeone3000 wrote:
| But still not particularly hard. mmap has a MAP_FIXED
| flag for this particular reason -- overwrite the arena
| you're decrypting to, and you should be set.
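|
| (In a managed language the rough equivalent is keeping the
| secret in a mutable buffer and overwriting it in place when
| done - a best-effort sketch with a placeholder secret, since
| copies the runtime makes elsewhere are not covered:)
|
|     secret = bytearray(b"hunter2")   # placeholder secret
|     try:
|         pass  # ... use the secret ...
|     finally:
|         for i in range(len(secret)):
|             secret[i] = 0   # wipe the buffer we control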
| marshray wrote:
| When your old hard drive turns up on ebay.
| nottorp wrote:
| It's not safe to sell SSDs is it?
|
| And even if it were, who would buy a used SSD with
| an unknown amount of durability gone?
| ziml77 wrote:
| If the data was always encrypted, then simply discarding
| the keys effectively means the drive is left filled with
| random data. Also, NVMe drives can be sent the sanitize
| command which can erase/overwrite the data across the
| entire physical drive rather than just what's mapped into
| the logical view. I believe there are SATA commands to
| perform similar actions.
| raesene9 wrote:
| TBH where you see this kind of thing (mandatory periodic
| password rotation every month or two) being recommended, it's
| people not keeping up with even regulators' view of good
| security practice.
|
| Both NIST in the US (https://pages.nist.gov/800-63-FAQ/) and
| NCSC in the UK
| (https://www.ncsc.gov.uk/collection/passwords/updating-
| your-a...) have quite decent guidance that doesn't have that
| kind of requirement.
| crngefest wrote:
| Well, my experience working in the industry is that almost no
| company uses good security practices or goes beyond some
| outdated checklists - a huge number want to rotate
| passwords, disallow/require special characters, lock out
| users after X attempts, or disallow users to choose a
| password they used previously (never understood that one).
|
| I think the number of orgs that follow best practices from
| NIST etc is pretty low.
| 8xeh wrote:
| It's not necessarily the organization's fault. In several
| companies that I've worked for (including government
| contractors) we are required to implement "certifications"
| of one kind or another to handle certain kinds of data, or
| to get some insurance, or to win some contract.
|
| There's nothing inherently wrong with that, but many of
| these require dubious "checkbox security" procedures and
| practices.
|
| Unfortunately, there's no point in arguing with an
| insurance company or a contract or a certification
| organization, certainly not when you're "just" the
| engineer, IT guy, or end user.
|
| There's also little point in arguing with your boss about
| it either. "Hey boss, this security requirement is
| pointless because of technical reason X and Y." Boss: "We
| have to do it to get the million dollar contract. Besides,
| more security is better, right? What's the problem?"
| Dalewyn wrote:
| >lock out users after X attempts
|
| Legitimate users usually aren't going to fail more than a
| couple times. If someone (or some _thing_ ) is repeatedly
| failing, lock that shit down so a sysadmin can take a look
| at leisure.
|
| >disallow users to choose a password they used previously
| (never understood that one)
|
| It's so potentially compromised passwords from before don't
| come back into cycle now.
| jay_kyburz wrote:
| >disallow users to choose a password they used previously
|
| I think Epic Game Store hit me with that one the other day.
| Had to add a 1 to the end.
|
| A common pattern for me is that I create an account at
| home, and make a new secure password.
|
| Then one day I log in at work but don't have the password on
| me so I reset it.
|
| Then I try and login again at home, don't have the password
| from work, so try and reset it back to the password I have
| at home.
| II2II wrote:
| > Mandatory password composition rules (excluding minimum
| length) and rotating passwords as well as all attempts at
| "replacing passwords" are inherintly dumb in my opinion.
|
| I suspect that rotating passwords was a good idea _at the
| time_. There were some pretty poor security practices several
| decades ago, like sending passwords as clear text, which took
| decades to resolve. There are also people who like to share
| passwords like candy. I'm not talking about sharing
| passwords to a streaming service you subscribe to, I'm talking
| about sharing access to critical resources with colleagues
| within an organization. I mean, it's still pretty bad which is
| why I disagree with them dismissing educating end users. Sure,
| some stuff can be resolved via technical means. They gave
| examples of that. Yet the social problems are rarely solvable
| via technical means (e.g. password sharing).
| Jerrrrrrry wrote:
| >I suspect that rotating passwords was a good idea at the
| time.
|
| yes, when all password hashes were available to all users,
| and therefore had an expected bruteforce/expiration date.
|
| It is just another evolutionary artifact from a developing
| technology complexed with messy humans.
|
| Repeated truisms - especially in compsci, can be dangerous.
|
| NIST has finally understood that complex password
| requirements decrease security, because nobody is attacking
| the entropy space - they are attacking the post-it
| note/notepad text file instead.
|
| This is actually a good example of an opposite case of
| Chesterton's Fence
|
| https://fs.blog/chestertons-fence/
| marshray wrote:
| It's not crazy to want a system to be designed such that it
| tends to converge to a secure state over time. We still
| have expiration dates on ID and credit cards and https
| certificates.
|
| The advantages just didn't outweigh the disadvantages in
| this scenario.
| Jerrrrrrry wrote:
| Apples to oranges.
|
| Usernames are public now.
|
| Back then, your username was public, and your password
| was assumed cracked/public, within a designated time-
| frame.
|
| Your analogy would hold if when your cert expires,
| everyone gets to spoof it consequence free.
| freeone3000 wrote:
| The cert can still be broken. The signatures are
| difficult, but not impossible, to break: it can and has
| been done with much older certificates, which means it
| will likely be doable to current certificates in a few
| years. In addition, certificate rotation allows for
| mandatory algorithm updates and certificate transparency
| logs. CT itself has exposed a few actors breaking the
| CA/B rules by backdating certificates with weaker
| encryption standards.
|
| Certificate expiration, and cryptographic key rotation in
| general, works and is useful.
| Jerrrrrrry wrote:
| apple cider to apples
|
| https://leahneukirchen.org/blog/archive/2019/10/ken-
| thompson...
|
| [previous hn discussion]:
| https://news.ycombinator.com/item?id=21202905
| marshray wrote:
| Much of the advice around passwords comes from time-sharing
| systems and predates the internet.
|
| Rules like "don't write passwords down," "don't show them on
| the screen", and "change them every N days" all make a lot
| more sense if you're managing a bank branch open-plan office
| with hardwired terminals.
| 01HNNWZ0MV43FF wrote:
| It's funny, writing passwords down is excellent advice on
| today's Internet.
|
| Physical security is easy, who's gonna see inside your
| purse? How often does that get stolen? Phones and laptops
| are high-value targets for thieves, and if they're online
| there could be a vuln. Paper doesn't have RCE.
|
| (That said, I use KeePass, because it's nice to have it
| synced and encrypted. These days only my KeePass password
| is written down.)
| izacus wrote:
| Based on the type of this rant - all security focused with
| little thought about usability of systems they're talking about
| - the author would probably be one of those people that mandate
| password rotation every week with a minimum of 18 characters
| to "design systems safely by default". Oh, and prevent keyboards
| from working because they can infect computers via USB or
| something.
|
| (Yes, I'm commenting on the weird idea about not allowing
| processes to run without asking - we're now learning from
| mobile OSes that it isn't practically feasible to build a
| universally useful OS that way, the kind that drove most of
| computing growth in the last 30 years.)
| red_admiral wrote:
| I was going to say passwords too ... but now I think passkeys
| would be a better candidate for dumbest ideas. For the average
| user, I expect they will cause no end of confusion.
| JJMcJ wrote:
| > people writing passwords down
|
| Which is better, a strong password written down, or better yet
| stored in a secure password manager, or a weak password committed
| to memory?
|
| As usual, XKCD has something to say about it:
| https://xkcd.com/936/
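|
| (The xkcd approach in a few lines - the word-list path is an
| assumption, any large list will do:)
|
|     import secrets
|
|     with open("/usr/share/dict/words") as f:
|         words = [w.strip() for w in f if w.strip().isalpha()]
|
|     # Four words chosen with a CSPRNG, per xkcd 936.
|     print(" ".join(secrets.choice(words) for _ in range(4)))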
| cwbrandsma wrote:
| Penetrate and Patch: because if it doesn't work the first time
| then throw everything away, fire the developers, hire new ones,
| and start over completely.
| Zak wrote:
| I'd drop "hacking is cool" from this list and add "trusting the
| client".
|
| I've seen an increase in attempts to trust the client lately,
| from mobile apps demanding proof the OS is unmodified to Google's
| recent attempt to add similar DRM to the web. If your network
| security model relies on trusting client software, it is broken.
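|
| (The classic small-scale version of that mistake, sketched
| with a made-up catalog - the client should only ever name the
| item, never price it:)
|
|     CATALOG = {"sku-123": 999}   # price in cents, server-side truth
|
|     def charge_trusting_client(request):
|         return request["price"]          # attacker-controlled
|
|     def charge_server_side(request):
|         return CATALOG[request["sku"]]   # server decides
|
|     req = {"sku": "sku-123", "price": 1}  # client claims 1 cent
|     print(charge_trusting_client(req), charge_server_side(req))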
| billy99k wrote:
| This is a mostly terrible 19-year old list.
|
| Here is an example:
|
| "Your software and systems should be secure by design and should
| have been designed with flaw-handling in mind"
|
| Translation: If we lived in a perfect world, everything would be
| secure from the start.
|
| This will never happen, so we need to utilize the find and patch
| technique, which has worked well for the companies that actually
| patch the vulnerabilities that were found and learn from their
| mistakes for future coding practices.
|
| The other problem is that most systems are not static. It's not
| release a secure system and never update it again. Most
| applications/systems are updated frequently, which means new
| vulnerabilities will be introduced.
| CookieCrisp wrote:
| I agree - an example of how, if you say something dumb with
| enough confidence, a lot of people will think it's smart.
| dartos wrote:
| The CTO at my last company was like this.
|
| In the same breath he talked about how he wanted to build
| this "pristine" system with safety and fault tolerance as the
| priority, and how he wanted to use raw pointers to shared
| memory to communicate between processes (each of which uses
| multiple threads to read/write that block of shared memory)
| because he didn't like how chatty message queues are.
|
| He also didn't want to use a ring buffer since he saw it as a
| kind of lock
| SoftTalker wrote:
| That sounds pretty deep in the weeds for a CTO. Was it a
| small company?
| dartos wrote:
| It was. I was employee number 10. The company had just
| started and was entirely bankrolled by that CTO.
|
| The CTO sold a software company he bootstrapped in 2008
| and afaik has been working as an exec since.
|
| The CEO, a close friend of Mr CTO, said that the system
| was going to be Mr CTO's career encore. (Read: they were
| very full of themselves)
|
| The CIO quit 4 days before I started for, rumor has it,
| butting heads with the CTO.
|
| Mr CTO ended up firing (with no warning) me and another
| dev who were vocal about his nonsense. (Out of 5 devs
| total)
|
| A 3rd guy quit less than a month after.
|
| That's how my 2024 started
| marshray wrote:
| I've had the CTO who was also a frustrated lock-free data
| structure kernel driver developer too.
|
| Fun times.
| dartos wrote:
| I forgot to mention that we were building all this in C#,
| as mandated by Mr CTO.
|
| He also couldn't decide between Windows Server and some
| RHEL or Debian flavor.
|
| I doubt this guy even knew what a kernel driver was.
|
| He very transparently just discounted anything he didn't
| already understand. After poorly explaining why he didn't
| like ring buffers, he said we should take inspiration
| from some system his friend made.
|
| We started reading over the system and it all hinged on a
| "CircularBuffer" class which was a ring buffer
| implementation.
| utensil4778 wrote:
| Okay, that would be a normal amount of bonkers to
| suggest in C or another language with real pointers.
|
| But in C#, that is a batshit insane thing to suggest. I'm
| not even sure if it's even legal in C# to take a pointer
| to an arbitrary address outside of your memory. That's..
| That's just not how this works. That's not how any of
| this works!
| neonsunset wrote:
| It is legal to do so. C# pointers == C pointers, C#
| generics with struct arguments == Rust generics with
| struct (i.e. not Box<dyn Trait>) arguments and are
| monomorphized in the same way.
|
| All of the following works:
|
|     byte* stack = stackalloc byte[128];
|     byte* malloc = (byte*)NativeMemory.Alloc(128);
|     byte[] array = new byte[128];
|     fixed (byte* gcheap = array)
|     {
|         // work with pinned object memory
|     }
|
| Additionally, all of the above can be unified with
| (ReadOnly)Span<byte>:
|
|     var stack = (stackalloc byte[128]);        // Span<byte>
|     var literal = "Hello, World"u8;            // ReadOnlySpan<byte>
|     var malloc = NativeMemory.Alloc(128);      // void*
|     var wrapped = new Span<byte>(malloc, 128);
|     var gcheap = new byte[128].AsSpan();       // Span<byte>
|
| Subsequently such span of bytes (or any other T) can be
| passed to pretty much anything e.g. int.Parse,
| Encoding.UTF8.GetString, socket.Send,
| RandomAccess.Write(fileHandle, buffer, offset), etc. It
| can also be sliced in a zero-cost way. Effectively, it is
| C#'s rendition of Rust's &[T], C++ has pretty much the
| same and names it std::span<T> as well.
|
| Note that (ReadOnly)Span<T> internally is `ref T
| _reference` and `int _length`. `ref T` is a so-called
| "byref", a special type of pointer GC is aware of, so
| that if it happens to point to object memory, it will be
| updated should that object be relocated by GC. At the
| same time, a byref can also point to any non-GC owned
| memory like stack or any unmanaged source (malloc, mmap,
| pinvoke regular or reverse - think function pointers or C
| exports with AOT). This allows writing code that uses
| byref arithmetic, same as with pointers, but without
| having to pin the object, retaining the ability to
| implement algorithms that match hand-tuned C++ (e.g. with
| SIMD) while serving all sources of sequential data.
|
| C# is a language with strong low-level capabilities :)
| sulandor wrote:
| though, frequent updates mainly serve to hide unfit engineering
| practices and encourage unfit products.
|
| the world is not static, but most things have patterns that
| need to be identified and handled, which takes time that you
| don't have if you sprint from quick-fix to quick-fix of your
| mvp.
| Pannoniae wrote:
| *Translation: If we didn't just pile on dependencies upon
| dependencies, everything would be secure from the start.
|
| Come on. The piss-poor security situation _might_ have
| something to do with the fact that the vast majority of
| software is built upon dependencies the authors didn't even
| look at...
|
| Making quality software seems to be a lost art now.
| worik wrote:
| > Making quality software seems to be a lost art now
|
| No it is not. Lost that is
|
| Not utilised enough....
| worik wrote:
| > This is a mostly terrible 19-year old list.
|
| This is an excellent list that is two decades overdue, for
| some.
|
| > software and systems should be secure by design
|
| That should be obvious. But nobody gets rich except by adding
| features, so this needs to be said over and over again
|
| > This will never happen, so we need to utilize the find and
| patch technique,
|
| Oh my giddy GAD! It is up to *us* to make this happen. Us. The
| find and patch technique does not work. Secure by design does
| work. The article had some good examples
|
| > Most applications/systems are updated frequently, which means
| new vulnerabilities will be introduced.
|
| That is only true when we are not allowed to do our jobs. When
| we are able to act like responsible professionals we can build
| secure software.
|
| The flaw in the professional approach is how to get over the
| fact that features sell now, for cash, and building securely
| adds (a small amount of) cost for no visual benefit
|
| I do not have a magic wand for that one. But we could look to
| the practices of civil engineers. Bridges do collapse, but they
| are not as unreliable as software
| ang_cire wrote:
| > The flaw in the professional approach is how to get over
| the fact that features sell now, for cash, and building
| securely adds (a small amount of) cost for no visual benefit
|
| Because Capitalism means management and shareholders only
| care about stuff that does sell now, for cash.
|
| > But we could look to the practices of civil engineers
|
| If bridge-building projects were expected to produce profit,
| and indeed _increasing_ profit over time, with civil
| engineers making new additions to the bridges to make them
| more exciting and profitable, they 'd be in the same boat we
| are.
| msla wrote:
| It's also outright stupid. For example, from the section about
| hacking:
|
| > "Timid people could become criminals."
|
| This fully misunderstands hacking, criminality, and human
| nature, in that criminals go where the money is, you don't need
| to be a Big Burly Wrestler to point a gun at someone and get
| all of their money at the nearest ATM, and you don't need to be
| Snerd The Nerd to Know Computers. It's a mix of idiocy straight
| out of the stupidest 1980s comedy films.
|
| Also:
|
| > "Remote computing freed criminals from the historic
| requirement of proximity to their crimes."
|
| This is so blatantly stupid it barely bears refutation. What
| does this idiot think mail enables? We have Spanish Prisoner
| scams going back centuries, and that's the same scam as the one
| the 419 mugus are running.
|
| Plus:
|
| > Anonymity and freedom from personal victim confrontation
| increased the emotional ease of crime, i.e., the victim was
| only an inanimate computer, not a real person or enterprise.
|
| Yeah, criminals will defraud you (or, you know, beat the shit
| out of you and threaten to kill you if you don't empty your
| bank accounts) just as easily if they can see your great, big
| round face. It doesn't matter. They're criminals.
|
| Finally, this:
|
| > Your software and systems should be secure by design and
| should have been designed with flaw-handling in mind.
|
| "Just do it completely right the first time, idiot!" fails to
| be an actionable plan.
| strangecharm2 wrote:
| I didn't think "write secure software" would be controversial,
| but here we are. How is the nihilist defeatism going? I'll get
| back to you after I clean up the fallout from having my data
| leaked yet again this week.
| izacus wrote:
| Trying to do security wrong often leads to much worse
| outcomes for data leakage than not doing it optimally. It's
| counter intuitive, but a lot of things in security are such.
| ang_cire wrote:
| No one is _objecting_ to writing secure software, but saying
| "just do it" is big "draw the rest of the owl" energy. It's
| hard to do even for small-medium programs, nevermind
| enterprise-scale ones with 100+ different components all
| interacting with each other.
| ongy wrote:
| It's not controversial to write secure software.
|
| Saying that's what should be done is useless, since it's
| not instructive.
|
| Don't fix implementation issues because that just papers over
| design issues? Great. Now we just need a team that never
| makes mistakes in design. And then a language that doesn't
| allow security issues outside business logic.
| duskwuff wrote:
| Steelmanning for a moment: I think what the author is trying to
| address is overly targeted "patches" to security
| vulnerabilities which fail to address the faulty design
| practices which led to the vulnerabilities. An example might be
| "fixing" cross-site scripting vulnerabilities in a web
| application by blocking requests containing keywords like
| "script" or "onclick".
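|
| (Concretely, and with a made-up payload: the keyword filter is
| the "patch", context-aware output encoding is the design fix:)
|
|     import html
|
|     payload = '<img src=x onerror=alert(1)>'
|
|     def keyword_filter(s):
|         # the kind of targeted "patch" described above
|         return s.replace("script", "").replace("onclick", "")
|
|     print(keyword_filter(payload))  # payload unchanged, still fires
|     print(html.escape(payload))     # &lt;img src=x onerror=alert(1)&gt;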
| CM30 wrote:
| I think the main problem is that there's usually an unfortunate
| trade off between usability and security, and most of the issues
| mentioned as dumb ideas here come from trying to make the system
| less frustrating for your average user at the expense of
| security.
|
| For example, default allow is terrible for security, and the
| cause of many issues in Windows... but many users don't like the
| idea of having to explicitly permit every new program they
| install. Heck, when Microsoft added that confirmation, many
| considered it terrible design that made the software way more
| annoying to use.
|
| 'Default Permit', 'Enumerating Badness' and 'Penetrate and
| Patch' are all unfortunately defaults because of this. People
| would rather make it easier/more convenient to use their
| computer/write software than do what would be best for security.
|
| Personally I'd say that passwords in general are probably one of
| the dumbest ideas in security though. Like, the very definition
| of a good password likely means something that's hard to
| remember, hard to enter on devices without a proper keyboard, and
| generally inconvenient for the user in almost every way. Is it
| any wonder that most people pick extremely weak passwords, reuse
| them for most sites and apps, etc?
|
| But there's no real alternative sadly. Sending links to email
| means that anyone with access to that compromises everything,
| though password resets usually mean the same thing anyway.
| Physical devices for authentication mean the user can't log in
| from places outside of home that they might want to login from,
| or they have to carry another trinket around everywhere. And
| virtually everything requires good opsec, which 99.9% of the
| population don't really give a toss about...
| bpfrh wrote:
| meh, passwords were a good idea for a long time.
|
| For the first 10 (20?) years there were no devices without a
| good keyboard.
|
| The big problem imho was the idea that passwords had to be
| complicated and long, e.g. random, alphanumeric, some special
| chars and at least 12 characters long, while a better solution
| would have been a few words.
|
| Edit: To be clear I agree with most of your points about
| passwords, just wanted to point out that we often don't
| appreciate how much tech changed after the smartphone
| introduction and that for the environment before that
| (computers/laptops) passwords were a good choice.
| CM30 wrote:
| That's a fair point. Originally devices generally had decent
| keyboards, or didn't need passwords.
|
| The rise of not just smartphones, but tablets, online games
| consoles, smart TVs, smart appliances, etc had a pretty big
| impact on their usefulness.
| freeone3000 wrote:
| It's that insight that brought forward passkeys, which have
| elements of SSO and 2FA-only logins. Apple has fully
| integrated, allowing cloud-sync'd passkeys: on-device for apple
| devices, 2FA-only if you've got an apple device on you. Chrome
| is also happy to act as a passkey. So's BitWarden. It can't be
| spoofed, can't be subverted, you choose your provider, and you
| don't even have to remember anything because the site can give
| you the name of the provider you registered with.
| tonnydourado wrote:
| I've seen "Penetrate and Patch" play out a lot on software
| development in general. When a new requirement shows up, or
| technical debt starts to grow, or performance issues, the first
| instinct of a lot of people is to try and find the smallest,
| easiest possible change to achieve the immediate goal, and just
| move to the next user story.
|
| That's not a bad instinct by itself, but when it's your only
| approach, it leads to a snowball of problems. Sometimes you have
| to question the assumptions, to take a step back and try to
| redesign things, or the new addition just won't fit, and the
| system just becomes wonkier and wonkier.
| mrbluecoat wrote:
| > sometime around 1992 the amount of Badness in the Internet
| began to vastly outweigh the amount of Goodness
|
| I'd be interested in seeing a source for this. Feels a bit
| like anecdotal hyperbole.
| julesallen wrote:
| It's a little anecdotal as nobody was really writing history
| down at that point but it feels about the right timing.
|
| The first time the FTP server I ran got broken into was about
| then. It was a shock: why would some a-hole want to do that?
| I wasn't aware until one of my users tipped me off a couple of
| days after the breach. They were sharing warez rather than porn
| at least; with the bandwidth back then, downloading even crappy
| postage-stamp 8-bit color videos would take you hours.
|
| When this happened I built the company's first router a few
| days later and put everything behind it. Before that all the
| machines that needed Internet access would turn on TCP/IP and
| we'd give them a static IP from the public IP range we'd
| secured. Our pipe was only 56k so if you needed it you had to
| have a really good reason. No firewall on the machines. Crazy,
| right?
|
| Very different times for sure.
| al2o3cr wrote:
| My guess is that this will extend to knowing not to open weird
| attachments from strangers.
|
| I've got bad news for ya, 2005... :P
| kstrauser wrote:
| Hacking _is_ cool. Well, gaining access to someone else's data
| and systems _is not_. Learning a system you own so thoroughly
| that you can find ways to make it misbehave to benefit you _is_.
| Picking your neighbor's door lock is uncool. Picking your own is
| cool. Manipulating a remote computer to give yourself access you
| shouldn't have is uncool. Manipulating your own to let you do
| things you're not supposed to be able to is cool.
|
| That exploration of the edges of possibility is what moves
| the world ahead. I doubt there's ever been a successful human
| society that praised staying inside the box.
| janalsncm wrote:
| We can say that committing crimes is uncool but there's
| definitely something appealing about _knowing how_ to do
| subversive things like pick a lock, hotwire a car, create
| weapons, or run John the Ripper.
|
| It effectively turns you into a kind of wizard, unconstrained
| by the rules everyone else believes are there.
| kstrauser wrote:
| Well put. There's something inherently cool in knowledge
| you're not supposed to have.
| jay_kyburz wrote:
| You have to know how to subvert existing security in order
| to build better secure systems. You _are_ supposed to know
| this stuff.
| kstrauser wrote:
| Some people think we shouldn't because "what if criminals
| also learn it?" Uh, they already know the best
| techniques. You're right: you and I need to know those
| things, too, so we can defend against them.
| teleforce wrote:
| This article is quite old and has been submitted probably every
| year since it was published, with the past submissions running
| to multiple pages.
|
| For modern version and systematic treatment of the subject check
| out this book by Spafford:
|
| Cybersecurity Myths and Misconceptions: Avoiding the Hazards and
| Pitfalls that Derail:
|
| https://www.pearson.com/en-us/subject-catalog/p/cybersecurit...
| nottorp wrote:
| > #4) Hacking is Cool
|
| Hacking _is_ cool. Why the security theater industry has
| appropriated "hacking" to mean accessing other people's systems
| without authorization, I don't know.
| Kamq wrote:
| > Why the security theater industry has appropriated "hacking"
| to mean accessing other people's systems without authorization,
| I don't know.
|
| A lot of early remote access was done by hackers. Same with
| exploiting vulnerabilities.
|
| One of my favorites is the Robin Hood worm:
| https://users.cs.utah.edu/~elb/folklore/xerox.txt
|
| TL;DR: Engineers from Motorola exploited a vulnerability to
| illustrate it, and they did so in a humorous way. Within the
| tribe of hackers, this is pretty normal, but the only
| difference between that and stealing everything once the
| vulnerability has been exploited is intent.
|
| Normies only hear about the ones where people steal things.
| They don't care about the funny kind.
| kstrauser wrote:
| From Britannica: https://www.britannica.com/topic/hacker
|
| > Indeed, the first recorded use of the word _hacker_ in print
| appeared in a 1963 article in MIT's _The Tech_ detailing how
| hackers managed to illegally access the university's telephone
| network.
|
| I get what you're saying, but I think we're tilting at
| windmills. If "hacker" has had a connotation of "breaking in" for
| 61 years now, then the descriptivist answer is to let it be.
| jibe wrote:
| "We're Not a Target" deserves promotion to major.
| jrm4 wrote:
| Missed the most important.
|
| "You can have meaningful security without skin-in-the-game."
|
| This is literally the beginning and end of the problem.
| dasil003 wrote:
| I was 5 years into a professional software career when this was
| written, at this point I suspect I'm about the age of the author
| at the time of its writing. It's fascinating to read this now and
| recognize the wisdom coming from experience honed in the 90s and
| the explosion of the internet, but also the cultural gap from the
| web/mobile generation, and how experience doesn't always
| translate to new contexts.
|
| For instance, the first bad idea, Default Permit, is clearly bad
| in the realm of networking. I might quibble a bit and suggest
| Default Permit isn't so much an idea as the natural state of
| affairs when one invents computer networking. But clearly Default
| Deny was a very good idea, and a critical one for the internet's
| growth. It makes a lot of sense in the context of
| global networking, but it's not quite as powerful in other
| security contexts. For instance, SELinux has never really taken
| off, largely because it's a colossal pain in the ass and the
| threat models don't typically justify the overhead.
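|
| To make Default Deny concrete, a minimal sketch (Python, with a
| made-up rule set; real firewalls express this as rule sets, not
| application code):
|
|     # Default deny: only traffic matching an explicit allow rule
|     # gets through; everything else is dropped.
|     ALLOW_RULES = {           # hypothetical policy
|         ("tcp", 443),         # HTTPS
|         ("tcp", 22),          # SSH
|     }
|
|     def decide(protocol, dst_port):
|         if (protocol, dst_port) in ALLOW_RULES:
|             return "accept"
|         return "drop"         # the default action is deny
|
|     decide("tcp", 443)        # accept
|     decide("udp", 9999)       # drop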
|
| The other bad idea that stands out is "Action is Better Than
| Inaction". I think this one shows a very strong big company /
| enterprise bias more than anything else--of course when you are
| big you have more to lose and should value prudence. And yeah,
| good security in general is not based on shiny things, so I don't
| totally fault the author. That said though, there's a reason that
| modern software companies tout principles like "bias for action"
| or "move fast and break things"--because software is malleable
| and as the entire world population shifted to carrying a
| smartphone on their person at all times, there was a huge land
| grab opportunity that was won by those who could move quickly
| enough to capitalize on it. Granted, this created a lot of
| security risk and problems along the way, but in that type of
| environment, adopting a "wait-and-see" attitude can also be an
| existential threat to a company. At the end of the day though, I
| don't think there's any rule of thumb for whether action vs
| inaction is better, each decision must be made in context, and
| security is only one consideration of any given choice.
| worik wrote:
| This is true in a very small part of our business
|
| Most of us are not involved in a "land grab"; in that
| metaphorical world most of us are past "homesteading" and are
| paving roads and filling in infrastructure
|
| Even small companies should take care when building
| infrastructure
|
| "Go fast and break things" is, was, an irresponsible bad idea.
| It made Zuck rich, but that same hubris and arrogance is
| bringing down the things he created
| michaelmrose wrote:
| > Educating Users
|
| This actually DOES work; it just doesn't make you immune to
| trouble any more than a flu vaccine means nobody gets the flu. If
| you drill shit into people's heads and coach people who make
| mistakes, you can decrease the number of people who engage in
| dumb, company-destroying behaviors by specifically enumerating
| the exact things they shouldn't do.
|
| It just can't be your sole line of defense. For instance, if Fred
| gives his creds out for a candy bar, 2FA keeps those creds from
| working, and you educate and/or fire Fred, then not only did your
| second line of defense succeed, but your first one is now
| stronger without Fred.
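|
| To make the layering concrete, a minimal sketch (Python, using
| the pyotp library; the user record and password handling are
| invented purely for illustration):
|
|     import hmac
|     import pyotp  # third-party TOTP library
|
|     # Hypothetical user record: a password plus a TOTP secret
|     # that is also enrolled in the user's authenticator app.
|     USERS = {"fred": {"password": "candybar123",
|                       "totp_secret": pyotp.random_base32()}}
|
|     def login(username, password, otp_code):
|         user = USERS.get(username)
|         if user is None:
|             return False
|         # First line of defense: the credential check.
|         if not hmac.compare_digest(password, user["password"]):
|             return False
|         # Second line of defense: even if the password leaks,
|         # the attacker still needs the current one-time code.
|         return pyotp.TOTP(user["totp_secret"]).verify(otp_code)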
| AtlasBarfed wrote:
| #1) Great, least privilege, oh wait, HOW LONG DOES IT TAKE
| TO OPEN A PORT? HOW MANY APPROVALS? HOW MANY FORMS?
|
| Least privilege never talks about how the security people in
| charge of granting permissions back do their jobs at an absolute
| sloth's pace.
|
| #2) Ok, what happens when goodness becomes badness, via exploits
| or internal attacks? How do you know when a good guy becomes
| corrupted without some enumeration of the behavior of infections?
|
| #3) Is he arguing to NOT patch?
|
| #4) Hacking will continue to be cool as long as modern
| corporations and governments are oppressive, controlling,
| invasive, and exploitative. It's why Hollywood loves the Mafia.
|
| #5) ok, correct, people are hard to train at things they really
| don't care about. But "educating users", if you squint, is
| "organizational compliance". You know who LOVES compliance
| checklists? Security folks.
|
| #6) Apparently, there are good solutions in corporate IT, and all
| new ones are bad.
|
| I'll state it once again: my #1 recommendation to "security"
| people is PROVIDE SOLUTIONS. Parachuting in with compliance
| checklists is stupid. PROVIDE THE SOLUTION.
|
| But security people don't want to provide solutions, because they
| are then REALLY screwed when inevitably the provided solution
| gets hacked. It's way better to have some endless checklist and
| shrug if the "other" engineers mess up the security aspects.
|
| And by PROVIDE SOLUTIONS I don't mean "offer the one solution for
| problem x (keystore, password management), and say fuck you if
| someone has a legitimate issue with the system you picked". If
| you can't provide solutions to various needs of people in the
| org, you are failing.
|
| Corporate Security people don't want to ENGINEER things, again
| they just want to make compliance powerpoints to C-suite execs
| and hang out in their offices.
| bawolff wrote:
| > If you're a security practitioner, teaching yourself how to
| hack is also part of the "Hacking is Cool" dumb idea. Think about
| it for a couple of minutes: teaching yourself a bunch of exploits
| and how to use them means you're investing your time in learning
| a bunch of tools and techniques that are going to go stale as
| soon as everyone has patched that particular hole.
|
| I would strongly disagree with that.
|
| You can't defend against something you don't understand.
|
| You definitely shouldn't spend time learning some script-kiddie
| tool, that is pointless. You should understand how exploits work
| from first principles. The principles mostly won't change or at
| least not very fast, and you need to understand how they work to
| make systems resistant to them.
|
| One of the worst ideas in computer security in my mind is cargo
| culting - where people just mindlessly repeat practises thinking
| it will improve security. Sometimes they don't work because they
| have been taken out of their original context. Other times they
| never made sense in the first place. Understanding how exploits
| work stops this.
| strangecharm2 wrote:
| True security can only come from understanding how your system
| works. Otherwise, you're just inventing a religion, and doing
| everything on faith. "We're fine, we update our dependencies."
| Except you have no idea what's in those dependencies, or how
| they work. This is, apparently, a controversial opinion now.
| grahar64 wrote:
| "Educating users ... If it worked, it would have worked by now"
| ang_cire wrote:
| It does work, but user training is something that, whether for
| security or otherwise, is a continuous process. New hires.
| Training on new technologies. New operating models. etc etc
| etc...
|
| IT is not static; there is no such thing as a problem that the
| entire field solves at once, and is forever afterward gone.
|
| When you're training an athlete, you teach them about fitness
| and diet, which underpins their other skills. And you _keep_
| teaching and updating that training, even though "being fit"
| is ancillary to their actual job (i.e. playing football,
| gymnastics, etc). Pro athletes have full-time trainers, even
| though a layman might think, "well haven't they learned how to
| keep themselves fit by now?"
| woodruffw wrote:
| Some of this has aged pretty poorly -- "hacking is cool" has, in
| fact, largely worked out for the US's security community.
| janalsncm wrote:
| > hacking is cool
|
| Hacking will always be cool now that there's an entire aesthetic
| around it. The Matrix, Mr. Robot, even The Social Network.
| chha wrote:
| > If you're a security practitioner, teaching yourself how to
| hack is also part of the "Hacking is Cool" dumb idea. Think about
| it for a couple of minutes: teaching yourself a bunch of exploits
| and how to use them means you're investing your time in learning
| a bunch of tools and techniques that are going to go stale as
| soon as everyone has patched that particular hole.
|
| If only this were true... Injection has been on the OWASP Top 10
| since its inception, and is unlikely to go away anytime soon.
| Learning some techniques can be useful just to do quick
| assessments of basic attack vectors, and to really understand how
| you can protect yourself.
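|
| For example, the classic injection case and its fix (a sketch in
| Python with sqlite3; the table and the attacker input are
| invented for illustration):
|
|     import sqlite3
|
|     conn = sqlite3.connect(":memory:")
|     conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
|     conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")
|
|     evil = "nobody' OR '1'='1"
|
|     # Vulnerable: attacker input is spliced into the SQL text,
|     # so the OR clause matches every row.
|     conn.execute(
|         "SELECT * FROM users WHERE name = '" + evil + "'"
|     ).fetchall()    # -> [('alice', 'hunter2')]
|
|     # Safe: a parameterized query treats the input as data, not
|     # SQL, so the crafted string matches nothing.
|     conn.execute(
|         "SELECT * FROM users WHERE name = ?", (evil,)
|     ).fetchall()    # -> []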
| tracerbulletx wrote:
| Hacking is cool.
___________________________________________________________________
(page generated 2024-07-14 23:00 UTC)