[HN Gopher] Sega Europe suffers major security breach
___________________________________________________________________
Sega Europe suffers major security breach
Author : aaronwp
Score : 170 points
Date : 2021-12-30 16:57 UTC (6 hours ago)
(HTM) web link (vpnoverview.com)
(TXT) w3m dump (vpnoverview.com)
| ff7c11 wrote:
| By temporarily defacing the Sega website and modifying files I
| think they have crossed the line. Enumerating what access they
| have, rooting through S3 and reporting it is OK, but by messing
| around like script kiddies they can no longer claim good faith.
| Publicising that you've illegally defaced the website is a little
| silly. Of course, Sega should not have got themselves so
| completely owned. Sega deserved to be punished, but these VPN
| twits have clearly committed a crime and Sega should maybe sue
| their company.
| walrus01 wrote:
| it seems like there's a couple of hundred consumer-facing VPN
| service providers, all with slick looking marketing websites to
| sell you a $5/mo service.
|
| lots of them are nothing more than 1 or 2 people and some
| rented 1U servers or dedicated servers somewhere on whatever
| ISP they can find with cheap IP transit / DIA rates. maybe a
| part time website design/graphic arts person they found via
| fiverr to make things look cool.
|
| from the perspective of a colocation-specialist ISP or medium
| sized generalist ISP that offers colo, they get lots of weird
| requests for colo and dedicated server services from VPN
| companies they've never heard of before. often with something
| like a corporate entity that exists in cyprus, panama or even
| weirder places.
|
| looking at this in terms of the risk that a VPN provider
| presents to an ISP's reputation, IP space, attracting unusual
| volumes and numbers of DDoS, etc... there is a certain amount
| of "KYC" (exact same idea as finance industry KYC) that needs
| to go into a potential vpn service provider as a colocation
| client before quoting them a price or accepting them as a
| customer. fail to do that at your own risk.
|
| it's very much in the weird/shady/grey market end of the ISP
| market.
|
| the level of technical acumen and professionalism varies
| greatly between VPN providers.
| tomrod wrote:
| Who are reputable in the space?
| ricardobayes wrote:
| protonmail
| walrus01 wrote:
| mullvad, the company mozilla recently partnered with.
|
| not much else...
|
| I am biased because I do my own VPN so all of them seem
| shady to me.
| Enginerrrd wrote:
| Tangential to the thread, but I've never understood what
| people mean when they say this.
|
| Do you run all your personal traffic through a VPS or
| something? That's not really offering the same thing as
| most VPN's. It hides your traffic from your ISP so they
| can't sell your data and snoop on you, but doesn't
| accomplish some of the anonymizing that an actual multi-
| user VPN can provide by adding additional traffic under
| the same IP.
|
| So, what do YOU mean when you say you "do your own VPN"?
| walrus01 wrote:
| One of the VMs that I have on a system in colocation is
| my own customized OpenVPN setup, where I also run the
| openssl CA for it. My phone, laptop, etc all have their
| own keys.
|
| It's set up for my own needs when I want to use a VPN
| from a weird place. Or simply to bypass artificial
| restrictions on traffic if I'm on amenity wifi in
| somebody's office, airport, hotel, etc. Since I can
| arbitrarily reconfigure it at will, and run multiple
| openvpn daemons from different .conf files listening on
| different ports with unique configurations (all relying
| on the same CA), I can do things like have one VPN that
| pushes a default route for my spouse's need to do
| internet things on restricted amenity wifi.
|
| Another part of it pushes only routes to a few /24 that
| are my personal project servers, and the routing table on
| vpn clients remains otherwise unmodified. Sometimes known
| as a split horizon VPN.
|
| >95% of the time I am not using it to run all my traffic
| through there.
|
| It's also the gateway and pushes routing table entries to
| things that exist for my personal
| test/project/development VMs that are in private IP
| space, so I need to be connected to the VPN in order to
| talk to those.
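The multi-daemon, shared-CA arrangement described above can be sketched with two server configs (directives are standard OpenVPN 2.x; the addresses, paths, and filenames are made-up placeholders):

```
# full-tunnel.conf - pushes a default route, for restricted amenity wifi
port 1194
proto udp
dev tun0
ca /etc/openvpn/ca.crt            # one CA shared by all daemons
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 10.8.0.1"

# split-horizon.conf - pushes only routes to a few project /24s,
# leaving the client's routing table otherwise unmodified
port 1195
proto udp
dev tun1
ca /etc/openvpn/ca.crt            # same CA again
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
server 10.9.0.0 255.255.255.0
push "route 203.0.113.0 255.255.255.0"
push "route 198.51.100.0 255.255.255.0"
```

Clients hold per-device keys signed by that CA and simply pick a port depending on whether they want the full tunnel or just the project routes.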
| __turbobrew__ wrote:
| Seconded mullvad. The only vpn provider which accepts
| cash by mail as a payment method.
|
| No email needed for sign up either.
| foldr wrote:
| >Sega deserved to be punished
|
| I don't understand this way of thinking. They made a serious
| security oversight, but that doesn't mean that they deserve to
| have their website defaced.
| nulbyte wrote:
| > Sega deserved to be punished, but these VPN twits have
| clearly committed a crime
|
| I think the rest of the sentence makes it clear the author
| didn't intend to support defacement as punishment.
| beckman466 wrote:
| dragontamer wrote:
| > Sega deserved to be punished
|
| The store owner was gone on vacation, and thus the side of his
| store was riddled with graffiti. He deserved to get graffiti
| because he didn't take basic security precautions.
| Fnoord wrote:
| You don't need to break security to spray the side of a
| store. You do need to break security to deface a website.
|
| Analogies are analogies; they're unnecessary in this case
| (nowadays), because we have laws to punish people who deface
| a website, and the law stands on its own.
|
| It's akin to people who call 'copyright infringement'
| 'theft'. It's not the same, it's a different mechanic,
| damages are different, and... different laws apply. That
| doesn't mean one's right or wrong or anything like it; like
| I said: the laws stand on their own, respectively.
| vrgutbdur wrote:
| 123pie123 wrote:
| they don't deserve to get graffiti, but it is expected
|
| they should be punished by legal means (legal proceedings or
| lawsuits) and by reputational damage
| throwawaygh wrote:
| I think "deserves" is a better word than "deserved".
|
| The punishment for grossly negligent handling of PII should
| not be a childish website defacement, and should not be
| enforced by vigilantes. Obviously.
|
| The punishment for mishandling PII like this should be a
| painful fine, a rigorous externally imposed technical audit,
| and possibly civil/criminal implications for senior
| leadership.
|
| (If the last one sounds unreasonable, consider Equifax. Many
| executives in charge of security orgs do not have technical
| degrees and, more importantly, have not booked any time in
| the trenches. Being self-taught and having non-engineering
| degrees can be okay, but combining that with no
| in-the-trenches experience is inexcusable. Assigning
| security to corporate politicians who don't understand the
| work they are managing should be treated as criminal
| negligence.)
| wwtrv wrote:
| To me this situation seems more like a store owner
| forgetting to lock the door; somebody noticed, came inside,
| put up a sign on the front window saying that the store
| owner is too stupid to lock his own door, and then called
| the owner to tell him about it.
| EGreg wrote:
| So the store owner can just leave all his customers' credit
| card information lying around and ignore PCI compliance etc.
| because anyone who would possibly use it for nefarious
| purposes is a criminal?
|
| How would you prevent such negligence
| charlesju wrote:
| Two wrongs don't make a right
| dragontamer wrote:
| > How would you prevent such negligence
|
| The ones who are damaged by the negligence sue for
| negligence.
|
| Similarly: those people who act recklessly can get sued for
| more, or even criminally prosecuted. Finally, someone who
| acts out with malicious intent can be sued / criminally
| charged with the highest crimes.
|
| -----------
|
| So in this "Sega" case: Sega can sue their security for the
| negligence.
|
| Then, the hackers can be sued for something between
| recklessness and malicious intent.
|
| Yeah, the law is flexible. "Justice" as a concept in the
| Western world revolves around both actions + intent. (With
| intent / state of mind in roughly 3 states: negligence,
| recklessness, and malice in that order).
|
| It's a flexible system, albeit sometimes imperfect... but
| just applying it in a textbook manner to this case results
| in acceptable results IMO.
| burnished wrote:
| Strong disagree (not about the law claims, I'll leave that
| to the law-knowers), but about the moral implications of
| 'crossing a line'. It reads like they revealed security
| vulnerabilities that had the possibility to harm others. I
| think they can be allowed some leeway in their methods.
| throwoutway wrote:
| Nope. That can come after responsible disclosure. Did they
| try the responsible path first? Looks like they notified and
| then kept going for another 10 days
| kiklion wrote:
| > By temporarily defacing the Sega website
|
| I may have missed it but what did they deface?
|
| I see a proof of script execution in what appears to be an
| uploaded file at a random-string-of-letters-and-numbers .htm
| address.
|
| So if done correctly there is a near-zero chance of any
| public user stumbling into the site.
| whoknew1122 wrote:
| They clearly said they modified careers.sega.co.uk and posted
| a screenshot of the careers site displaying vpnoverview's
| logo (https://vpnoverview.com/wp-content/uploads/screenshot-
| about-...)
| vlunkr wrote:
| They say it "briefly" showed that logo. Who knows how long
| that is.
| whoknew1122 wrote:
| The question was whether a site was defaced, not how long
| it was defaced for.
| rosndo wrote:
| It's hilarious to see people generating content like this to push
| their VPN affiliate marketing schemes.
| walrus01 wrote:
| If there's one thing that I can absolutely rely upon, it's for
| VPN service providers to use any and every form of shady grey
| marketing sales technique that exists.
| ipaddr wrote:
| Another growth marketing hack successful. Double if there is
| a lawsuit.
| batch12 wrote:
| So the breach referenced was a breach by the researchers, not a
| malicious third party (that we know of)? I would have called it
| exposure or a vulnerability since breach has a specific meaning
| that I am not sure this fits. Maybe I am being pedantic.
| thr0wawayf00 wrote:
| "Breach" is a legal term, and although IANAL, it seems
| semantically correct here. When anyone outside of your
| organization gains access to sensitive information in your
| systems, regardless of their intent, that is a breach and these
| guys accomplished that. PCI and all of those other security
| protocols and programs don't draw the line at white-hat access
| vs black-hat access.
| isbvhodnvemrwvn wrote:
| If you're running on AWS, why would you even have long-lived
| credentials in your images?
| whoknew1122 wrote:
| There's a few reasons, none of them good.
|
| Likely the answer is gross incompetence.
|
| If I were to give them the benefit of the doubt and provide the
| most defensible reason to have an image that contains AWS
| credentials, you could theoretically use long-term (i.e. user)
| AWS credentials on an on-premises VM and then export the server
| image to AWS. When you rehost the server in EC2, you would
| switch to an instance role per best practices. And then you
| forget to delete the image stored in S3.
|
| Still doesn't explain why the S3 bucket is publicly available.
| But that's one reason a server image with long-term credentials
| could end up stored in an S3 bucket.
|
| Unlikely that the image was an EBS snapshot or AMI. While those
| are technically housed in S3, you can't access them from the S3
| console. And they didn't brag about accessing the EC2 console.
| 0xbadcafebee wrote:
| A good example of how the _usability_ of your product directly
| affects security.
|
| AWS has multiple forms of credentials. IAM Users (static keys
| tied to a specific user identity) are one form. But you can also
| authenticate via SAML or OIDC. If you use SAML/OIDC, you can
| enforce temporary IAM credentials, audit who authenticated,
| expire credentials, enforce password rules & MFA, etc.
|
| Because IAM Users are the _easiest thing_ to set up, that's
| what everyone does. And that leads to compromises. If, on the
| other hand, IAM Users were _more difficult_ to set up than
| SAML/OIDC, then everyone would use SAML/OIDC and temporary
| credentials. And that would mean giant compromises like these
| would be much rarer, because it would eliminate the easiest
| form of compromise: people putting static, non-expiring keys
| where they shouldn't be.
|
| So when you develop a thing, think about the consequences of it,
| and design it so that users are more inclined to use it in a way
| that leads to good outcomes. That might even mean making parts of
| it intentionally hard to use.
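One concrete difference between the two credential forms: per AWS's published key-prefix list, long-term IAM user keys and temporary STS keys are distinguishable by their first four characters, which is handy when auditing what kind of credential leaked. A minimal sketch (the helper name is made up):

```python
def credential_kind(access_key_id: str) -> str:
    """Classify an AWS access key ID by its documented prefix.

    "AKIA" marks a long-term IAM user key (never expires on its own);
    "ASIA" marks a temporary key vended by STS (SAML/OIDC/assume-role).
    """
    if access_key_id.startswith("AKIA"):
        return "long-term IAM user key"
    if access_key_id.startswith("ASIA"):
        return "temporary STS key"
    return "unknown"

# A key baked into a public server image would be the first kind:
print(credential_kind("AKIAIOSFODNN7EXAMPLE"))  # prints "long-term IAM user key"
```

(`AKIAIOSFODNN7EXAMPLE` is AWS's own documented example key, not a real credential.)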
| watermelon0 wrote:
| When allowing 3rd parties to access your AWS resources, IAM
| keys are in most cases the only way to achieve this.
|
| For example, most CI/CD systems don't support OIDC yet, so you
| have to add IAM keys to them. GitHub Actions is a notable
| exception here.
| banana_giraffe wrote:
| Reminds me of this I stumbled across for ngrok:
|
| > Can I run my own ngrok server?
|
| > Yes, kind of. You may license a dedicated installation of
| the ngrok server cluster for commercial use. You provide us
| with keys to an AWS account and we will install the server
| cluster software into that account.
|
| I have no idea how common this pattern is, but personally,
| the idea of giving someone else AWS creds that aren't _very_
| locked down scares me.
| josephcsible wrote:
| I never understood the point of self-hosting ngrok. Isn't
| its entire value proposition that it lets you borrow a
| stable public IP to host something with when you don't have
| one of your own?
| drjasonharrison wrote:
| This also occurs when your AWS resources need to access 3rd
| party services. Some services don't have temporary key
| support.
| isbvhodnvemrwvn wrote:
| If you need to access 3rd party services from your AWS
| account you can at the very least put those credentials
| into Secret Manager or SSM Parameter Store, so that your
| application retrieves them at runtime when it needs to - no
| need to store them with the app.
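A sketch of that runtime-retrieval pattern (`get_secret_value` and its `SecretString` response field are the real Secrets Manager API; the client is passed in here, and the secret name is a placeholder):

```python
import json

def load_third_party_creds(sm_client, secret_id: str) -> dict:
    """Fetch a JSON secret at startup instead of baking it into the image.

    sm_client is expected to be boto3.client("secretsmanager"); with an
    instance role attached, no static AWS key ever touches the app.
    """
    resp = sm_client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])

# Usage on an EC2 instance with a role (sketch):
#   import boto3
#   creds = load_third_party_creds(boto3.client("secretsmanager"),
#                                  "prod/mailchimp-api-key")
```

The third-party key still exists, but it lives in one auditable, rotatable place rather than in an AMI, an env file, or a public bucket.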
| dc-programmer wrote:
| If the third party has their own IAM users, you can create a
| cross-account trust relationship where you allow their IAM
| entity to assume a (scoped-down) role in your account. Then
| they are able to retrieve temporary credentials to assume
| this role.
| isbvhodnvemrwvn wrote:
| One reason people don't like doing this is that by assuming
| this role you lose all the privileges in your own account.
| It's not something you can't overcome (e.g. by using
| separate credential chains in different parts of the app),
| but people are lazy.
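On the third party's side, the cross-account handoff described above looks roughly like this (`assume_role` and its `Credentials` response shape are the real STS API; the role ARN and session name are placeholders):

```python
def temporary_creds_for_partner(sts_client, role_arn: str) -> dict:
    """Trade a long-lived identity for short-lived, scoped-down credentials.

    The account owner creates a role that trusts the partner's IAM entity;
    the partner calls AssumeRole and receives keys that expire on their own.
    sts_client is expected to be boto3.client("sts").
    """
    resp = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName="partner-access",
        DurationSeconds=3600,  # credentials die after an hour
    )
    c = resp["Credentials"]
    return {
        "aws_access_key_id": c["AccessKeyId"],
        "aws_secret_access_key": c["SecretAccessKey"],
        "aws_session_token": c["SessionToken"],
    }
```

Had Sega handed partners a role like this instead of static user keys, a leaked image would have contained nothing that still worked an hour later.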
| politelemon wrote:
| If I'm understanding correctly, a whole bunch of credentials,
| like IAM keys, DB passwords, Steam keys, and MailChimp keys,
| were lying around in S3 buckets.
|
| But I don't understand the use case, what would be the purpose of
| uploading those details into S3 buckets? Or I suppose I'm trying
| to reverse engineer the situation where the dev/ops team decided
| to do this.
| isbvhodnvemrwvn wrote:
| Why they used users in the first place I don't know, but for
| IAM credentials - I've seen people using Terraform to generate
| the users and access keys, and storing the access keys in the
| terraform state (you can't access secret keys after they are
| generated), and the entire state of Terraform is typically
| stored in something like an S3 bucket.
|
| It's definitely not a great practice, but still it's done.
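For reference, the pattern being described: Terraform's `aws_iam_access_key` resource exposes the generated secret only through state, so whoever can read the state backend can read the key. A fragment (resource and bucket names are illustrative):

```
resource "aws_iam_user" "ci" {
  name = "ci-deployer"
}

resource "aws_iam_access_key" "ci" {
  user = aws_iam_user.ci.name
  # The generated secret is recoverable only from state from here on,
  # via the aws_iam_access_key.ci.secret attribute...
}

terraform {
  backend "s3" {
    # ...which means it sits, in plaintext, in this bucket:
    bucket = "example-terraform-state"
    key    = "iam/terraform.tfstate"
  }
}
```

Anyone who can list and read `example-terraform-state` effectively holds every key Terraform ever minted there.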
| drjasonharrison wrote:
| Rather than use a password manager, or credential store, or
| some other secure way to keep these credentials safe while
| providing access to internal developers for development
| purposes, they put them on S3.
|
| Here's an example I have seen:
|
| - env file is needed for development to run a service on a
| development machine and to access the staging deployment
| - the credentials in the env file aren't per-developer
| because that requires work to set up accounts for every
| developer with the staging hosting service
| - so make a copy of the credentials, put them in an env file
| on the NAS
| - NAS isn't available from home or from other network
| locations
| - so make a copy of the env file in the cloud
|
| If the S3 bucket hadn't been public they probably would have
| been fine.
| grogenaut wrote:
| S3 keeps secrets out of source code, so you at least don't have
| to purge git history and can lock access down to "internal
| developers", and can relatively easily rotate the creds, just
| find everything in the creds bucket (instead of searching all
| your code).
|
| Handling of secrets has gone through many rapid iterations in
| the cloud lately since around 2013.
|
| For AWS: In Source. In a magic file that lives on build
| machine. In S3 with crypto at rest that you can pull when you
| boot your machine, or dynamo, or DB, just one boot password or
| IAM role to get you access to the rest. Then in Envvars for the
| service. Then Secrets manager / SSM Parameter store, more
| recently.
|
| Various organizations and pieces of software are somewhere
| along this curve. And the less cared for this software is (or
| even known about, people forget software), the further back on
| the curve it likely is.
|
| Beyond the above methods there is a more constant rotation
| behavior similar to Hashi Vault using SSM/Secrets Manager,
| and a drive to require all systems to use constantly
| rotating credentials (no static creds). I'm not sure what
| comes after that.
|
| However what system you use is highly dependent on your
| organizational maturity and internal threat model.
| ljm wrote:
| One can only speculate but I can't imagine how many companies
| will avoid investing in security here, because they think that
| the secrets in their git repos and S3 buckets are perfectly
| safe, and they allow some people to skip 2FA because it's too
| inconvenient for them, and some people have root access on AWS
| because it's easier, etc. Maybe even giving the job to people
| who don't have much experience in the field and are still
| learning how to set up things in the cloud.
|
| A publicly accessible S3 bucket suggests that someone
| mistakenly thought it was private, or at least safe by
| obscurity.
| aaronwp wrote:
| Sega Europe left AWS S3 creds lying around in a server image on
| downloads.sega.com. I was able to use them to enumerate a bunch
| of storage, dig out more keys, and mock up a spear phishing
| attack against the Football Manager forums.
|
| All the keys and services are secure and the breach is closed.
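The kind of sweep described here, digging leaked AWS keys out of an image or a bucket listing, often starts with nothing fancier than a prefix scan over the raw bytes. A toy version (the function name is made up; real tools also scan for the matching secret key nearby):

```python
import re

# Long-term AWS access key IDs are 20 characters starting with "AKIA".
_AKIA = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_access_key_ids(blob: str) -> list[str]:
    """Return candidate long-term AWS access key IDs found in text."""
    return _AKIA.findall(blob)

sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE\nother = not-a-key"
print(find_access_key_ids(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Each hit can then be checked against the account's own IAM user list to see whether it is live.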
| duxup wrote:
| > dig out more keys
|
| I guess that if they leave them lying around that it is likely
| there are more.
| imwillofficial wrote:
| This would be awesome as a blog post if you ever want to go
| into detail on how you executed each step.
| [deleted]
| vmception wrote:
| Should have just left it at that and collected the bug bounty,
| defacing for a proof of concept and telling everyone pretty
| much makes you ineligible in any white hat program. Can I get
| dibs on your flat while you're in the... camp?
| jsploit wrote:
| > I was able to use them to enumerate a bunch of storage, dig
| out more keys
|
| That's unethical and likely criminal without explicit testing
| authorization (which it appears you didn't have).
|
| I wonder if there are any examples of "researchers" being
| sued/prosecuted for stunts like this.
| phnofive wrote:
| Is it common, now or historically, to follow up a notification
| of compromise with self-directed PoC and privilege escalation
| exercises on the resources of a company with which you're not
| under contract? My naive take is that this was a series of
| well-intentioned but possibly criminal actions used to
| illustrate a lesson we could all be reminded of from time to
| time.
|
| Also, the HackerOne page doesn't appear to be claimed by SEGA
| Sammy, so notices might dead-end there as well.
| aaronwp wrote:
| Yes, if PII is involved it's common to run an audit like
| this. In addition to the access keys on the server image,
| Sega also accidentally published a database export containing
| PII. In order to write a comprehensive disclosure I have to
| investigate thoroughly.
|
| And yeah, there's no branding or information on HackerOne.
| Even if this had been in scope, I would have thought twice
| about submitting anything. Our publishing standards match
| HackerOne ethical disclosure standards.
| phnofive wrote:
| Did Sega agree to this public disclosure?
|
| Referring to the HackerOne standards, it appears your team
| violated a couple:
|
| > _Respect privacy._ Make a good faith effort not to access
| or destroy another user's data.
|
| > _Do no harm._ Act for the common good through the prompt
| reporting of all found vulnerabilities. Never willfully
| exploit others without their permission.
| wwtrv wrote:
| Publicly disclosing it seems to clearly fall under 'act for
| the common good through the prompt reporting of all found
| vulnerabilities', since SEGA's users are the real victims in
| this situation and have the right to know that SEGA is
| incapable of keeping their data safe.
| tentacleuno wrote:
| > Even if this had been in scope, I would have thought
| twice about submitting anything.
|
| Sorry, I don't understand. Why would you be hesitant to
| responsibly disclose it to HackerOne?
| phnofive wrote:
| They didn't know about it beforehand, but even if you
| visit the page, it says that SEGA Sammy hasn't claimed
| it, so it appears unofficial.
| throwawaygh wrote:
| _> Is it common, now or historically_
|
| Historically: yes.
|
| Now: no.
|
| _> possibly criminal_
|
| Sans some sort of formal agreement (which platforms like
| HackerOne might facilitate), it's definitely criminal. (IMO
| at least not unethical, to be clear.)
|
| Again, sans some sort of contract either one-off or platform
| based. If SEGA wanted a prosecution, they would almost
| certainly be able to convince a prosecutor to press charges.
| The prosecutor would certainly get a guilty verdict. (Or,
| much more likely, a guilty plea with a bit of prison time and
| stiff probation.)
|
| This still happens from time to time in _much_ more ambiguous
| situations. E.g.,
| https://www.nytimes.com/2021/10/15/us/missouri-st-louis-
| post...
|
| Fortunately, there's a bit of a gentleman's detente among
| reasonable white hats and reasonable companies. But if you
| venture much outside of the small set of companies who rely
| on and have technologists in senior leadership, the story
| changes fast.
| floatingatoll wrote:
| That detente's boundaries may be somewhat vague and
| impossible to guarantee, but you can broad-brush paint
| yourself into a safer box with these four principles:
|
| - Don't make humiliating changes to their content
|
| - Don't mess with their userbase
|
| - Don't leave undocumented backdoors
|
| - Don't damage production
|
| If you do your best to comply with those principles, then
| you can make a strong argument to a judge/jury that your
| behavior was without malice, which will notably improve
| your chances of survival if someone decides the usual
| detente isn't palatable.
| kingcharles wrote:
| I used to do this white hat hacking back in the day: modify
| a page on the web server, send a link to the admin with the
| exploit walkthrough.
|
| It's a dangerous game to play now, though. You're basically
| betting the company you tested your PoC on would rather
| avoid the negative PR of filing charges against you, vs. a
| bunch of non-technical suits who just want to see you do
| 150 years in Sing Sing.
| ta3927590 wrote:
| Historically, definitely. Currently? Fairly common. However,
| what's both historically and currently uncommon is having
| the sense not to identify yourself while doing so. People do
| it for the h4x0r cred, or whatever, which is of course
| childishly idiotic, but makes my job a whole lot easier. In my
| experience, if you're not under any such contract and even if
| you are going to report such a compromise in complete good
| faith and have done no damage, you are far better off doing
| so as anonymously as possible. Nobody likes to be
| embarrassed, and it's a lot simpler for a corporation with a
| stock price and public image to think about to pin the whole
| situation on those damn hackers than own up to even the
| slightest degree of incompetence. Typing at work in sort of a
| hurry so, please forgive grammatical issues.
| ta3927590 wrote:
| voakbasda wrote:
| Yup, this was totally criminal in most jurisdictions. I don't
| care if the person intended to help; this kind of vigilante
| hacking deserves to land you in prison.
|
| You want a bounty? Talk to me _before_ you break into my
| systems. Because once you do that without my permission, you
| have proven yourself completely unworthy of being trusted.
| Why should I believe that you have not installed a rootkit or
| other tech that you did not subsequently disclose?
|
| You will need to be treated the same as any other criminal.
| If my insurance gets involved, that also probably means
| directly assisting with an attempt at criminal prosecution.
|
| So, yeah, brilliant strategy. /s
| duxup wrote:
| How do you know there's a breach without seeing it?
| whoknew1122 wrote:
| How would Sega know there are AWS API keys in a public S3
| bucket without vpnoverview defacing their careers site?
| Sega could probably, y'know, look in the S3 bucket at the
| identified file which contained the keys.
|
| All of the things found could have been investigated by
| Sega and replicated if vpnoverview just documented how
| they got access to the info.
|
| You don't have to joyride in a car to show the owner that
| they dropped their keys.
| craftinator wrote:
| > You don't have to joyride in a car to show the owner
| that they dropped their keys.
|
| This is the most accurate analogy I've seen in months,
| thank you for sharing it!
| wwtrv wrote:
| In this case SEGA, due to their incompetence, lost a bunch
| of car keys owned by other people, despite claiming that
| they'd keep them safe (and having a legal obligation to do
| so under GDPR). So I don't see any problem with publicly
| exposing them.
| batch12 wrote:
| A vulnerability or a breach?
| voakbasda wrote:
| Not sure why my comment got downvoted, but it very much
| feels like HN is defending this kind of behavior. This is
| why we can't have nice things.
| rectang wrote:
| You can't have nice things because you aggressively
| criminalized the white hats, thus were never warned by
| them before a black hat took your nice things away.
|
| > _Why should I believe that you have not installed a
| rootkit or other tech that you did not subsequently
| disclose?_
|
| Because doing that and also disclosing your identity
| would be incredibly stupid?
| ajmurmann wrote:
| > You can't have nice things because you aggressively
| criminalized the white hats
|
| voakbasda even proposed giving a bounty. Is defacing a
| website and spearphishing the users (as is claimed higher
| up in the thread) needed for white hats to do their
| thing? I'm surprised that we aren't all in agreement that
| this isn't at least grey hat behavior.
| rectang wrote:
| It's unclear to me where the line is being drawn and a
| zero-tolerance policy applied with maximum criminal
| penalties pursued.
|
| The whole world sucks: the companies who are slovenly
| with our data, the criminals who exploit that data when
| it is inevitably leaked, the grey hat hackers who
| "joyride to prove they found your keys" (to use the
| memorable metaphor from elsethread), the circumstances
| which make probing for vulnerabilities incredibly risky
| because one misstep gets you a prison sentence, the
| resulting feast of vulnerabilities ripe for criminal
| exploitation....
| voakbasda wrote:
| Do you understand that, from the perspective of the
| person suffering an attack, there is absolutely zero
| difference between a good guy that breaks in without a
| contract, permission, or other sort of approval and an
| actual bad guy? The act of committing a crime actively
| destroys trust.
|
| Come to me with a list of _potential_ vulnerabilities
| that I can detect and investigate with an open source
| scanner, and we can talk. Come to me after you've
| already broken in, and you will never be granted the
| trust required to work on security systems.
|
| I think this whole scenario effectively is perjury. Once
| someone has been proven to lie, everything associated
| with that lie needs to be vetted (or simply thrown out),
| because you have demonstrated that this person cannot be
| trusted to tell the truth. Does anyone here think that
| perjury is moral or ethical? Is the scenario presented
| here really that different?
| rectang wrote:
| The "person suffering the attack" is not the only party
| who suffers from an attack -- the individuals whose
| information gets leaked also suffer when a company hoards
| toxic data and it inevitably spills.
|
| From the perspective of _those_ individuals, there is a
| dramatic difference between black hats who exploit their
| data and grey hats who humiliate the toxic data hoarders.
| voakbasda wrote:
| Do you think those individuals will see the difference?
|
| Also, I would argue there is no gray. A white hat that
| breaks the law cannot be trusted, because they become
| indistinguishable from a black hat that is pretending to
| be a white hat.
|
| This all comes down to a matter of trust, and breaking
| the law does not build trust in anyone except other
| criminals. If anything, it erodes trust by demonstrating
| the willingness to skirt the rules when it suits you.
|
| In this case and context, I see the use of "gray hat" as
| an attempt to whitewash black hat activities. Once you
| behave like a black hat, you always need to be treated
| like a black hat. Trust is like that, particularly when
| talking about security.
| robtaylor wrote:
| Assertive: Show me something else.
| aaronwp wrote:
| stay tuned
| swdev281634 wrote:
| Was not the best idea to do that. Sega is a very traditional
| Japanese company. Consequences are likely to follow, but not
| the legal ones.
___________________________________________________________________
(page generated 2021-12-30 23:01 UTC)