[HN Gopher] VPN by Google One security assessment
___________________________________________________________________
VPN by Google One security assessment
Author : campuscodi
Score : 120 points
Date : 2022-12-11 16:46 UTC (6 hours ago)
(HTM) web link (research.nccgroup.com)
(TXT) w3m dump (research.nccgroup.com)
| quotemstr wrote:
| Some of the findings are ridiculous. To highlight the lack of
| binary obfuscation as a threat (albeit a "low-severity" one) is
 | absurd. Kerckhoffs's principle [1] applies. Security through
| obscurity (or obfuscation) is no security at all. To highlight
| the lack of binary obfuscation (which, in its maximal form, would
| be the client being open source software) as a security threat is
| the "demand" for security vulnerabilities exceeding the "supply".
|
| If you, as a business, make money from identifying security
| vulnerabilities in applications, then you have every incentive to
 | invent vulnerabilities where none exist. And if you're a client of
| such a service, then no matter how conscientious you are, no
| matter how much attention you pay to security, someone,
| somewhere, will be able to claim that he's discovered a
| vulnerability if he's able to make arbitrarily hostile
| assumptions about the underlying platform.
|
| In the limit: "Attack: Program fails to defend against an attack
| in which Intel specially and secretly recognizes this program's
| machine code and sends the contents of its memory space to the
 | NSA. Recommendation: Program ceases to exist." Give me a break!
|
| [1] https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
| smoldesu wrote:
 | If a lack of binary obfuscation is a threat, then every iPhone
 | and Android handset ships with a pretty broken threat model. I
| don't disagree with what you're saying, but binary fuzzing is
| pretty much par for the course on these matters.
| quotemstr wrote:
| Huh? The system libraries of Android are not especially
 | obfuscated.
| xnx wrote:
| Google One VPN is a big deal, and potentially as big a nightmare
| for shady data brokers as the iOS cookie apocalypse was for
 | Facebook and others. Google One's VPN was first available to paid
 | subscribers on Android, then on Windows and Mac. Now some Pixel
 | owners can use it. Imagine if they turned it on for all Chrome OS
 | users, all Chrome mobile users, or all Chrome users, period. You
 | can have
| your suspicions about Google, but I'd trust them over any other
| tech company or VPN provider.
| maximinus_thrax wrote:
| > You can have your suspicions about Google, but I'd trust them
| over any other tech company or VPN provider.
|
| I don't even trust Google to handle my regular online searches
| and now they're doing VPNs? It's quite amusing.
|
| > shady data brokers
|
| You're just exchanging one shady data broker for another
| (Google).
|
| I would not trust Google to keep a product alive unless it
| enables a direct revenue stream or enhances an existing one. A
 | VPN run by Google is, in my view, an insult.
| sfmike wrote:
 | Commenting so I can come back in a few years to see this proven
 | correct, like the plethora of similar downvoted comments that
 | turned out to be true.
| foota wrote:
 | You pay for the VPN.
| humanistbot wrote:
| If you don't pay, you're the product. If you do pay, you're
| still the product.
| WanderPanda wrote:
| Can you elaborate why you trust them over Apple?
| xnx wrote:
| Probably a tie with Apple. I like that Apple isn't trying to
| sell my attention (yet), but I do prefer the higher level of
| openness from Google (Android, Chrome, open source
| contributions, etc.)
| dathinab wrote:
 | Maybe due to how Apple acts in, and with respect to, China.
 |
 | People somehow end up thinking Apple cares deeply about their
 | privacy, but AFAIK Apple only cares about privacy because it
 | makes them money. Or, more specifically, the image it creates
 | helps them sell phones. But the moment the _long term_ cost
 | outweighs the benefits, they have also shown they'll bail. (I
 | highlighted "long term" because there have been cases where,
 | short term, it was more expensive because it e.g. involved
 | legal cases, but exactly those legal cases are a great image
 | boost for them, since people will use them like "don't worry,
 | see, they didn't even break their security to help the police
 | with <insert arrested very bad person>".)
 |
 | Now, that doesn't mean Apple is bad. To some degree it's a
 | win-win for the user: they get better privacy and Apple gets
 | more users. But it doesn't mean you can fully trust Apple (or
 | IMHO any company).
| Spooky23 wrote:
| Google's approach to China is the same realpolitik as Apple
| and privacy.
|
| Google was never going to get what it wanted/needed in
| China, so walking away was a pragmatic stance projected as
| an act of courage, etc.
|
 | Google and Apple are actually more similar than one would
 | think. They are both leveraging their ownership of powerful
| segments, and they are both threatened by integrated
| software stacks. The big difference is that Google does
| creepy shit as a first party, and Apple outsources that to
| third parties.
|
 | Google's ad business was hurt by Facebook/Instagram and the
 | death of the web. Apple's existential threat is someone
| making WeChat for the US.
| akimball wrote:
| ...and his name is Musk.
| ThePowerOfFuet wrote:
| > Apple's existential threat is someone making WeChat for
| the US.
|
| It's called Signal.
| Spooky23 wrote:
| WeChat in China is like early 1990s era AOL or Prodigy
| for smartphones - it's a vertical solution.
|
 | Signal is an encrypted messenger that has bolted on some
| crypto bullshit. It's just a nerd tool.
| scarface74 wrote:
 | Yes, and trusting Google, who convinced consumers to install
 | what was supposed to be an internal iOS development
 | certificate so they could track them, isn't a bridge too
 | far?
| eternalban wrote:
| You can't judge Apple's views based on this matter in the
| short term. Apple has strategic investments in China and
| can't just tell CPC to go pound sand.
|
| However, if Apple still is critically dependent on China in
| say 5 years, then it would be fair to look at the corporate
| response to Chinese demands and reach conclusions. If they
| truly care about privacy and the rest of it, they should be
| well into their transition plans to other locations about
 | now. An industry expert would be able to tell if the
| effort, e.g. to use India instead, failed due to an
| insincere effort on the part of Apple (for PR), or the
| unpleasant fact that China is not a commodity resource
| pool, yet.
| Shebanator wrote:
| People always make this argument, for at least the last
| 10 years. Since Apple has not yet shown any real action
| towards reducing their dependency on China, why would it
| be any different 5 years from now?
| eternalban wrote:
| I don't know the specifics of Apple's case. There is no
| real need, imo. I'll elaborate why I say that.
|
| Apple, an American company, made critical decisions based
| on what was said, disclosed, or strongly implied publicly
| and privately regarding both national and international
| political (leadership) consensus on globalization and
 | China. Disregarding orthogonal ideological considerations,
 | e.g. "(American/x) corporations are inherently bad", a fair-
 | minded evaluation of Apple (or any ~Western company) would
 | -imo- need to give weight to what political leadership, at
 | both national and international levels, has done and is
 | doing.
|
 | Let's say Apple leaves China cold turkey, on principle.
| The abrupt and unprepared move ultimately ends the golden
| ride and Apple is out of the handset market. Who are we
| then buying our smart spyware/assistants from? Some
| company in Asia, correct?
|
 | If the principle is truly that important (and I am -not-
| saying it isn't) then political leadership has to
| uniformly decouple from China. Why should Apple shoulder
 | what is ultimately a geopolitical cost? And yes, you and
 | I desiring a world according to _our_ value system, where the
 | state cannot force private organizations to violate human
 | rights, _is_ a geopolitical desire. So
| Apple should do certain things, but political leadership,
| and consumers, also need to do their bit as well.
| SadTrombone wrote:
| Apple has already started moving iPhone production to
| India/Vietnam with the goal of having 40-45% of their
| iPhones being produced there. It's far from a trivial
| move.
| judge2020 wrote:
| For reference, this seems to have started with Airpods
| which have been (in part) assembled in Vietnam since
| 2020: https://redd.it/gokl90
| smoldesu wrote:
| Which means it took them over a decade to start moving
| their assembly lines out of China. The GP comment still
| stands, Apple clearly had no urgent interest in ending
| their relationship with China. A cynic might even suggest
| that they're only doing it now that the geopolitical
| tensions are favoring the US...
| Retric wrote:
| What exactly are you thinking of as the starting point?
| Apple only started production of iPhones in 07 and things
| didn't instantly fail.
| syrrim wrote:
 | It's a service that costs them money. Unless it gains them money
| somehow, they won't hand it out for free.
| innocentoldguy wrote:
| I trust ProtonVPN over anything Google makes.
| oars wrote:
| What is Google's long term strategy with this product? Or will it
| be another thing they deprecate in a few years...
| svet_0 wrote:
| Really boring report, nothing of real essence. Either G's product
| is bullet-proof or the research quality from NCC has
| deteriorated.
| nibbleshifter wrote:
| The quality of work from NCC has always been really
| inconsistent. It also depends on "which part of NCC" you were
| looking at.
| Aachen wrote:
| > It also depends on "which part of NCC" you were looking at.
|
 | For the security consultancy part of NCC: it's not like I
 | catalogue and re-check their findings, so this is probably
 | biased, but the only report I remember is the one from
 | Keybase, where they failed to notice that the claimed end-to-
 | end encryption trusts the server to deliver the right keys.
 | This was tested together with some other people on HN, using
 | packet capturing (one theory was that it checks the third-
 | party proofs on websites like reddit/HN/..., and that it's
 | user error if you don't have any, but no, not even that).
|
| I was really surprised by both Keybase getting something so
| fundamental wrong (they claim some blockchain magic
| verification which you can do on the command line, but the
| app doesn't have a blockchain client and no manual fallback
| either, so it's never verifying anything and instead fully
| trusts the centralized Keybase-operated proprietary servers)
| and by NCC not noticing this problem. Someone I knew from the
| security stackexchange site and whom I admire greatly took
| part in the audit, but of course they never replied (not even
| declining to comment) when I emailed them with a question
| about how this verification works (at that point, I still
| felt like I must be missing something so this email wasn't
| phrased accusatorily).
|
| I don't have a bad impression of NCC in general and we all
| make mistakes, but yeah that's the example that stuck in my
| mind.
| nibbleshifter wrote:
| My experience with them has been both knowing a large
| number of their consultants at certain locations, and
| reading the reports they issue for network and webapp
| pentests.
|
| Work quality on those was a real mixed bag, some was
 | _terrible_, some was _excellent_. It didn't leave me with
| much faith in their QA process.
|
 | Similar experiences with other larger consultancies (MWR,
 | now F-Secure, etc.).
| bri3d wrote:
| The actual pentest findings here are pretty boring, but the
| architecture details are quite interesting, as are the identified
| risk models for the attempt at "anonymity."
|
| The fundamental anonymity model Google have employed is that the
| VPN tunnel is authenticated using "blind tokens" which are signed
| by an authorization service "Zinc." The Zinc service validates
| OAuth credentials passed in from the client app to verify a
| user's subscription, then signs the blind token created by the
| client app to authorize the user's session with the actual VPN
| backend "Copper." Ostensibly, this will separate the user's
| Google identity from the VPN identity since the VPN service only
| knows that it got a valid, signed user token, not what user that
| token belongs to.
|
| Ultimately, as pointed out in the report, this is IMHO not that
| useful. Google could still trivially re-correlate traffic inside
| of the VPN service by using packet inspection combined with a
| crafted (or accidental!) plaintext sidechannel from an
| authenticated Google client application (unencrypted DNS requests
| to a special subdomain, port/IP knocking style requests to a
| specific list of IPs in a specific order, etc.).
|
| Also, if there's timestamped logging in both the Zinc and Copper
| services, the attempt at blinding between the two sides of the
 | system is also quite meaningless, since the flow to Zinc and
 | the flow to Copper could just be correlated back into a user
 | identity, using timing, by a Google employee with access to
 | logs.
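 |
 | To make the timing point concrete, a toy sketch of that back-
 | correlation (hypothetical log data and field names):
 |
 |     # Pair each auth event with the nearest subsequent tunnel
 |     # event; tokens are usually redeemed moments after issue.
 |     zinc_log = [(1000.00, "alice@example.com"),
 |                 (1004.20, "bob@example.com")]
 |     copper_log = [(1000.35, "session-9f2"),
 |                   (1004.61, "session-c41")]
 |
 |     for t_auth, account in zinc_log:
 |         t_conn, session = min(
 |             ((t, s) for t, s in copper_log if t >= t_auth),
 |             key=lambda pair: pair[0] - t_auth)
 |         print(account, "->", session)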
| rasz wrote:
| >The fundamental anonymity model Google have employed is that
| the VPN tunnel is authenticated using "blind tokens" which are
| signed by an authorization service "Zinc." The Zinc service
| validates OAuth credentials passed in from the client app to
| verify a user's subscription, then signs the blind token
| created by the client app to authorize the user's session with
| the actual VPN backend "Copper." Ostensibly, this will separate
| the user's Google identity from the VPN identity since the VPN
| service only knows that it got a valid, signed user token, not
| what user that token belongs to.
|
| In video form for those not keeping up:
| https://www.youtube.com/watch?v=y8OnoxKotPQ
| encryptluks2 wrote:
 | I pay for Mullvad and also have Google One. I use Google One
 | for times when I have to connect to an unknown WiFi network
 | but don't care if Google knows it's me. I use Mullvad when I
 | want some confidence in not being tracked.
| class4behavior wrote:
| >but don't care about if Google knows it is me
|
| This is not what privacy primarily is about, though. Taking
| Cypher's choice is also not some profound path.
|
 | Privacy is indeed a measure protecting one against
 | becoming a victim or a target of malicious intent. First and
| foremost, however, privacy determines who is in control and
| has power over you through (access to) knowledge related to
| you. It's a human right that shapes all aspects of any
| society, not just some best practice like locking one's door.
|
 | The problem is that people don't think or care much about
 | faraway ills, which is why it is too easy for us to dismiss
 | and normalize the degradation of our rights.
| chiefalchemist wrote:
| Why not Mullvad all the time?
| jonas-w wrote:
 | Because Mullvad exit IPs may be more suspicious than other
 | VPN providers' IP addresses.
| eightails wrote:
| I'm curious, why would that be the case?
| sneak wrote:
| Because Mullvad allows for totally anonymous payment, and
| is thus preferred by the nefarious.
|
| I assume Google only allows you to pay with a payment
| card as usual, and even getting a Google account requires
| a phone number now.
|
| I use Mullvad all of the time. Some of their locations'
| public exit IPs are blocked for some/all services. Can't
| log in to HN from some of them, can't use Etsy at all
| from some of them, can't checkout on eBay from some of
| them, etc.
| encryptluks2 wrote:
 | Google already has my personal information. Using my Google
 | account while logged into Mullvad isn't a good idea, because
 | then I leak that to Google and whoever else they share data
 | with.
| worldsavior wrote:
| Probably to separate identities.
| skybrian wrote:
| Although it might not be very reassuring to people on the
| outside who treat Google as a monolithic threat, internal
| controls like this do make insider attacks more difficult, by
| making them more visible to internal security and audits.
|
| In particular, it sounds like it would be difficult to do it
| via log analysis alone; it would have to be a more active
| attack and may require code changes. Either changing code in
| source control or somehow swapping in a different binary can be
| pretty noisy (assuming there are good controls over the build
| and deploy process).
|
 | You might compare it to having good accounting. You can't just
| trust people when they say, "we have strict accounting" because
| they could very well be lying, but nonetheless, _not_ having
| proper accounting is a red flag.
| World177 wrote:
| > The fundamental anonymity model Google have employed is that
| the VPN tunnel is authenticated using "blind tokens" which are
| signed by an authorization service "Zinc." The Zinc service
| validates OAuth credentials passed in from the client app to
| verify a user's subscription, then signs the blind token
| created by the client app to authorize the user's session with
| the actual VPN backend "Copper." Ostensibly, this will separate
| the user's Google identity from the VPN identity since the VPN
| service only knows that it got a valid, signed user token, not
| what user that token belongs to.
|
 | In case you need even more information on how this works,
 | Google provides more detail on their page [1]:
|
| > The blinding algorithm employed was first described by Chaum
 | in 1982, and is commonly referred to as 'RSA Blind Signing'.
| The goal is to never use the same identifier in the
| Authentication server and the Key Management Service. To
| accomplish this, the client generates a token, hashes it using
| a Full Domain Hash, and combines it with a random value and the
| server's public signing key to produce a blinded token. That
| blinded token is then signed by our authentication server. When
| the client wants to connect to the VPN, it can unblind the
| blinded token and its signature using the random value only it
| knows. The unblinded token and the signature are then
| verifiable by our Key Management Server.
|
 | This is an easy way to understand how this works. This actually
 | does not sound like the state-of-the-art for cryptography. In
 | their example, the client has to provide the secret, which then
 | corresponds to a hash that was encrypted using this secret key
 | along with Zinc's public key. Zinc knows who provided it the
 | blinded token that it signed. On revelation of this secret key,
 | Zinc could immediately determine who the owner is.
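 |
 | For reference, the algebra behind the unblinding step in the
 | quote above (my notation: m is the token hash, (n, e) Zinc's
 | public key, d its private key, r the client's random blinding
 | factor):
 |
 |     blinded   = m * r^e                              (mod n)
 |     signed    = blinded^d = m^d * r^(e*d) = m^d * r  (mod n)
 |     unblinded = signed * r^(-1) = m^d                (mod n)
 |     check     : unblinded^e = m                      (mod n)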
|
| Tornado Cash bypasses this leak of information by using a zero
| knowledge proof involving a Merkle path [2]
|
| [1] https://one.google.com/about/vpn/howitworks
|
| [2] https://github.com/tornadocash/tornado-core
| jmole wrote:
| FTA [1]: "the only piece that links the authentication server
| to the data tunnel server is a single, public key, used to
| sign all blinded tokens presented during a limited period of
| time"
|
| It sounds like the only information Google could produce is:
| "yes, this traffic came from our VPN, here is a list of the
| 20,000 users who logged in during this key rotation period"
|
| Am I missing something?
| bri3d wrote:
| Yes, this is the concept, and the NCC report correctly
 | identifies a lot of deltas from reality, which I also
| outlined in my comment above:
|
| * Google or a privileged attacker sitting at either end of
| the VPN could exfiltrate the user's identity through packet
| inspection and unencrypted personal identifiers in the
| user's VPN traffic, both obvious identifiers like plaintext
| data on the network and less obvious identifiers like
| specific subdomains or IPs correlated to a specific set of
| identities. This is a fairly fundamental problem with all
| "privacy" VPNs - as soon as you've exposed your identity by
| logging into a service over the VPN link, you're fairly
| easily unmasked again to a privileged attacker.
|
| * If the authentication and VPN services have timestamped
| logging, the user's identity leaks through this side
| channel. At Google scale of course there's still bound to
| be some mixing, but it's a lot more granular than a key
| rotation period.
|
| * A privileged attacker could also perform network analysis
| between the authentication and VPN services as well to
| achieve the same goal.
|
 | Perhaps Google have some lightweight countermeasures
 | against these types of attack (even a random delay of some
 | milliseconds on the client side between receiving a signed
 | blind token and using it to establish the VPN connection
 | would help a tiny bit), but if they do, they weren't
 | outlined in the report.
|
 | My takeaway from this is "Google made a fine VPN, but it's
 | nothing groundbreaking, and so-called privacy VPNs are
| still not a thing." Depending on your threat model a VPN is
| still a very useful tool, but none of them can truly
| anonymize you.
| sneak wrote:
| Your set of queried host names via DNS (unencrypted) is
| probably globally unique or close to it.
|
| Additionally SNI is (usually) not encrypted so all of
| your browsing hostnames are sent in the clear (over the
| VPN) even when using TLS and DoH.
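 |
 | A toy illustration of how identifying a hostname set can be
 | (hypothetical data; real traffic involves hundreds of names):
 |
 |     # Match an observed set of DNS lookups against known
 |     # browsing profiles by Jaccard similarity.
 |     observed = {"news.ycombinator.com", "nixos.org", "etsy.com"}
 |     profiles = {
 |         "user_a": {"news.ycombinator.com", "nixos.org",
 |                    "lobste.rs", "etsy.com"},
 |         "user_b": {"facebook.com", "tiktok.com"},
 |     }
 |
 |     def jaccard(a, b):
 |         return len(a & b) / len(a | b)
 |
 |     best = max(profiles,
 |                key=lambda u: jaccard(observed, profiles[u]))
 |     print(best)  # user_a, similarity 0.75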
| World177 wrote:
| > Perhaps Google have some lightweight countermeasures
| against these types of attack (even a random some-
| milliseconds timing at the client side between receiving
| a signed blind token and using it to establish the VPN
| connection would help a tiny bit), but if they do, they
| weren't outlined in the report.
|
 | The client could pre-generate session tokens days or
 | months in advance, then leave their house and head to
 | public wifi (to avoid traffic correlation on their home
 | network). Even after unblinding the session token, it
 | would not be feasible to determine who paid for the VPN.
| World177 wrote:
 | If Google's VPN used Tornado Cash's protocol, they wouldn't
 | be able to take the proof of having a session key back to
 | Zinc to determine who asked for it to be signed (if we
 | assume Zinc is malicious). The protocol used looks like it
 | was meant to provide anonymity with respect to third
 | parties, but not anonymity from the first parties. Google
 | also links to the paper from 1982 that they're referencing.
 | [1]
|
 | [1] https://www.hit.bme.hu/~buttyan/courses/BMEVIHIM219/2009/Cha...
| World177 wrote:
 | > This actually does not sound like the state-of-the-art for
| cryptography.
|
 | I'm wrong. Review the paper Google provided as a source for
 | more information. [1] There are also lectures available
 | online from Ronald Rivest, [2] one of the researchers who
 | helped invent/discover the RSA cryptosystem (if you are
 | not familiar with how RSA works).
|
 | [1] https://www.hit.bme.hu/~buttyan/courses/BMEVIHIM219/2009/Cha...
|
| [2] https://www.youtube.com/watch?v=v4qaxSKTVlA
| jeffbee wrote:
| It seems very silly to analyze the Google VPN with Google as
| your privacy adversary. The point of the VPN is to hide your
| traffic from 9th-circle privacy adversaries like local
| telephone companies.
|
 | It also seems pretty silly to mix interesting findings into
 | a big PDF, sprinkled among other items that are, speaking
 | generously, debatable matters of opinion, like whether
 | programs should be stripped or not.
| zoklet-enjoyer wrote:
| Yeah, I just use it so my employer can't see what websites
| I'm browsing
| clay-dreidels wrote:
| You should use a second cheap computer for that.
| zoklet-enjoyer wrote:
| That would be impractical
| bri3d wrote:
| I agree with this - VPNs protect you from only the ends of
| the chain, while centralizing the middle, and it's rather
| foolish to think that any VPN construct will reliably blind
| traffic and identity from the VPN provider. Depending on your
| threat model this is a good thing or a bad thing overall.
|
| However, this "privacy" goal is something that a huge number
| of VPN providers, including Google, claim as a goal and
| market heavily against, so it's fair to assess against it
| IHMO.
|
| As for the second part, yeah, that's pentest reports in a
| nutshell. A few nuggets of information combined with
| checklist-satisfying irrelevant findings from automated
| tooling.
| michaelmrose wrote:
| A VPN service that doesn't log should reliable be expected
| to protect you from dragnet style surveillance that is
| initiated after the fact. If the government asks google for
| everyone who searched for a term in a range of dates will
| return a list of IPs and times. Comcast may choose to
| happily connect your real identity when asked but a VPN
| provider that didn't log can honestly answer that they have
| no way to establish that.
|
 | At another level, a VPN provider that doesn't operate in-
 | country can reliably refuse orders to surveil you in the
 | future, where those orders don't rise to the level of
 | obtaining international cooperation AND don't meet standards
 | higher than the nonsensically low ones in, say, the US.
|
 | This won't be enough to protect your international criminal
 | empire, but might well provide reasonable assurance of
 | privacy to concerned individuals. I'm sure the Google one
 | is good for something....
|
 | Perhaps keeping your ISP from chiding or blocking you for
 | pirating music until that hole is plugged... or keeping
 | people who access http websites over coffee shop wifi safe?
| Izmaki wrote:
 | One such "interesting finding" is that the VPN software runs
 | with admin privileges, including the parts that deal with
 | things that would not need admin privileges, such as debug
 | logging. This increases the attack surface for a hacker who
 | already has limited control of the machine to _potentially_
 | elevate their privileges, _if they manage to find a
 | vulnerability_.
 |
 | This is one of their medium findings. This report smells like
 | "Guys, we _MUST_ find something, give me something, anything!
 | "... :P
| jeffbee wrote:
| Yeah, I love the finding that an attacker in possession of
| your unlocked iOS device could discover your email address.
| Big news!
| bri3d wrote:
| These reports usually start from a baseline
| template/checklist. I'm sure the "unobfuscated binary" and
| "admin" findings that everyone on this thread is making fun
| of are part of the SOP for NCC Group audits of native
| applications.
|
| We laugh and argue that this makes pentests a joke (which
| plenty are!), but at the end of the day that "admin" note
| resulted in a material improvement, however tiny, in the
| system. Google reacted by removing the need for escalation
| from the application and now an attacker needs one more
| exploit to build a full chain.
| nibbleshifter wrote:
| They are SOP/standard findings.
|
| You work through a checklist for arse covering, which
| generates a lot of meaningless/low value findings.
|
| If you omit a lot of those issues, the client thinks you
| didn't do the work sufficiently.
|
| If you put in too many of those findings, the report gets
| made fun of for containing "filler".
| dfc wrote:
| What does "9th-circle privacy adversary" mean?
| xnx wrote:
 | Reference to Dante's Inferno and the 9th circle of hell. I
 | believe it's a way of saying "very, very bad" adversaries
 | here.
| jokowueu wrote:
 | I hope they audit Outline next.
| Izmaki wrote:
 | Writing that report must have been hell with - I bet - all the
 | attention from senior management.
| sillysaurusx wrote:
| The actual process of writing those reports is generally pretty
| laid back. It was hell mostly because it consumes all your
| time. Only about 50% of your time is spent hacking, if that.
| The rest is poured into that damn report.
|
| It goes through multiple rounds of review, and every small
| detail is caught and corrected. Which is how it should be. It's
| just the opposite of fun.
|
| The report is literally the product. The implications of that
| didn't sink in until a few months on the job.
| bink wrote:
| I'm going to stay away from the tire fire of accusations
| between you and the former NCC boss. But I do want to say
| that having run a pen testing firm for many, many years these
| claims aren't hard to believe. It does depend on what you
| consider to be time spent working on a report, however.
|
| 50% time actually hacking even sounds kinda high depending on
| how the organization is structured. We had some engineers
| that just absolutely sucked at writing documentation but were
 | whizzes at hacking, so they got away with passing some of the
 | documentation off on others (dramatically increasing the
 | other engineers' workloads, but sometimes that's ok). We also
| had some folks that were just great at formatting and dealing
| with client requests for docs and they could take workload
| off of engineers and make everyone happy.
|
| An engagement ranged from 3 days to 6 weeks, with the average
| around 3 weeks. A typical internal pen test would be 2 weeks
| onsite followed by a week for writing the report and handling
| feedback (and travel and meetings and blah blah blah). It
| wasn't unusual for a client to request that all references to
| X be changed to Y or the word "should" be changed to "shall"
| or whatever other thing client management cared about. That
| time can add up quickly and make your job miserable.
|
| Every single pen testing firm goes through a phase where they
| realize they are "wasting" weeks of effort on writing their
| reports. And they all decide they're going to automate as
| much as they can to improve their profit margins and employee
| happiness. They all usually come up with a snarky name for
| the new tool that does this automation. Most of the time the
| generation tool saves pain when it comes to formatting
| tables, table of contents, glossaries, etc., but wastes time
| wrestling with client expectations that their report be
| unique or focus on unusual areas. Then the engineer has to
| figure out how to modify the auto generated report without
| breaking any future changes to finding severity or writeup.
|
| Some people would also consider time spent presenting the
| report to a board of directors as time spent "working on a
| report" and I wouldn't really disagree. It's not unusual to
| send an engineer and a manager to do a presentation at a
| remote site after the engagement finishes. That'll include
| 1-2 days of travel time and the time the group spends trying
| to "up sell" the client on further engagements.
|
| It's been some years since I've been in that field but back
 | in my day a breakdown of a 3-week engagement could look
| something like this:
|
| First week is the fun part. You probably spend 30-40 billable
| hours on actual discovery, scanning, testing, hacking. And if
| you're smart you spend 3-5 hours on the early design of the
| report. The client might want a focus on the API or on WiFi
| or whatever and you arrange things so the report is organized
| properly early on and the team can input their findings as
| needed (or input them to the automated system which might not
| have a dedicated category for this focus area).
|
| Much of the second week could be considered part of writing
| the report. You've ideally root'd most of the infra in the
| first week and now you're using tools and manual inspection
| to find every single vuln you can in the networks. You're
 | using automated tools to find common misconfigurations and
| missing patches. Those tools are (hopefully) connected to
| your automated report generating system somehow so you don't
| need to format them manually. But there's always new stuff on
| each engagement that'll need some custom write-ups and
| severity determination will need to be adjusted depending on
| the client's security posture and risk profile.
|
| The third week is 1-2 days getting the report ready, though
| that can extend further for unique engagements. Then there's
| submitting it to a tech editor, submitting it to management,
| submitting it to the client and dealing with corrections from
| all of them. It's extremely rare for a client not to have
| comments and changes that need to be addressed and that's
| another 2-8 hours.
|
| Pen testing is difficult -- the travel, the presentations,
| the sales, the report writing. I'm not saying what this
| commenter is claiming is true, but the job is not how most
| people imagine it to be. I'll also say that that's not a
| criticism of NCC at all. All firms I've worked for, managed,
| or talked with are like this.
| tptacek wrote:
| I'm not this person's former boss, nor was I ever an "NCC
| boss" of any sort. I'm an IC.
| bink wrote:
| Thanks for the clarification.
| rfoo wrote:
| Not to mention you HAVE to put some really lame
| recommendation here.
|
| "Application binary is not obfuscated", seriously, wtf?
| hsnewman wrote:
 | Since it's not obfuscated, the binary can be decompiled
 | easily and any coding issues can be identified more easily.
| [deleted]
| saagarjha wrote:
| This is a good thing.
| Spooky23 wrote:
| The point of an assessment like this is to identify
| potential risks. That one is categorized as
| "informational", and isn't a defect.
|
| These types of recommendations are useful in context. You
| may have a moderate finding or series of findings that in
| context are more significant because of some informational
| finding.
| tptacek wrote:
| Nobody spends 50% of their time writing reports. If anything,
| it's the exact opposite: people do their entire doc the last
| Friday of the test. It's enough of a problem that carefully
| scoped tests sometimes include doc days in the SOW. I used to
| get in arguments with the person we hired to run one of our
| offices for demanding that people stay late on Fridays to
| finish docs that customers weren't even going to look at
| until the following week.
|
| This "multiple rounds of review" thing is news to me too.
|
| To answer the original comment: the ordinary process for a
| project like this with a public report is, you deliver the
| whole project as if there wasn't a public report, with an
| internal report, and then the public report is based on the
| internal one (probably post-remediation). So actually
| delivering the project is usually no different than any other
| project.
| sillysaurusx wrote:
| Tom, I literally worked at your company. Matasano.
|
| Let's just say the culture changed after you left. Is that
| so hard to believe?
|
| You probably didn't spend 50% of your time writing reports,
| but Andy and the rest of my coworkers did.
|
| Working at Matasano was one of the biggest disappointments
| of my career. It was basically the exact opposite of
| everything you pitched it to be. But I should've known
| better than to be deceived by marketing.
|
| The actual hacking was sometimes fun, though. But again,
| that was _maybe_ 50% of the time, if you're lucky. Would
| you like me to break down an actual engagement timeline as
| a refresher?
|
| An engagement lasts a week, max. Usually it's two days.
| That means day one is hacking, day two is reporting.
|
| The weeklong engagements are where the scales sometimes
| tip. But even by day three, you had better start writing up
| your findings, or else the report won't be ready by Friday,
| and the client (or more specifically Wally, our manager)
| will not be pleased.
| thaumasiotes wrote:
| I literally worked at NCC Group after it had acquired
| Matasano.+
|
| Your idea of the timeline does not match my experience.
| The normal engagement length is two weeks. (One week is a
| possibility, and I was on an engagement that was four
| weeks, but that was unusual.) In the one-week or two-week
| case, the final Thursday was devoted to writing the
| report, and the final Friday was devoted to presenting
| it. It would have been very unusual to spend 50% of total
| time on writing the report, though for something this
| high-profile (I was never on something similar) I
| wouldn't be shocked.
|
| + Since they've come up, I would feel remiss if I didn't
| mention that they fired me for no stated reason++ with
 | zero days' notice and zero severance. At the meeting where
| I was fired, they assured me that my benefits would not
| end immediately and would instead continue through the
| end of the month, despite the fact that it was October
| 31.
|
| ++I have a strong guess about the reason. It wasn't a
| good reason.
| gardenhedge wrote:
| Take it to direct messages or a phone call since the two
 | of you seem to know each other. HN is not the place for
| this personal conversation.
| tptacek wrote:
| Yes, it's hard to believe. You worked at Matasano/NCC for
| like a minute immediately after I left. What you are
| describing is nothing like the ordinary workload at NCC;
| the modal NCC engagement is 2 people, 2 weeks --- in
| fact, that's the modal engagement across the entire
| industry, not just at NCC. "A week, max". Sheesh.
|
| I'm not interested in defending NCC; I have no interest
| in NCC (other than the friends of mine who still work
| there, I guess) and haven't since 2014. But I'm an
| inveterate message board nerd and I'm going to damn sure
| correct false things said about pentesting on HN.
|
| In this instance, I'm going to go ahead and say I have
| better information about this than you do.
|
| It's "Thomas", by the way.
| sillysaurusx wrote:
| I worked there for a year, before they fired me after I
| was diagnosed with Narcolepsy:
| https://news.ycombinator.com/item?id=33765309
|
| During my stint, I completed over 40 engagements. I never
 | failed to find at least one medium-severity flaw on any of
 | those.
|
| Of course, you wouldn't know, since you showed up a grand
| total of ~two times. One was to brag about how Sam Altman
| called you up to offer you a slot at YC for Starfighter,
| another was to ask Wally for the rights to
| Microcorruption.
|
| Meanwhile, I was the one in the field, doing the work,
| alongside Andy and the rest of the team. We spent a huge
| portion of our time writing. I'll hop on a computer and
| describe it in more detail.
|
| It's funny that you declare "No one spends 50% of their
| time writing" like you're the sole authority on the
| matter, across all the pentesting shops in the world. You
| didn't even get it right at your own company.
|
| Saying that I worked there "for like a minute" is cute,
| but not reflective of reality. Shall I go into more
| detail about my experiences? We can compare notes, and
| see where your experience diverged.
| tptacek wrote:
| I've never spoken to anybody in management at NCC about
| why they fired you, but your coworkers have told stories
| about it, and I'm not sure you want the conversation
| you're asking to have. It is not the same story you tell.
|
| I don't know why you're shocked at the number of times I
| "showed up" at the NCC Chicago office. I'll say it again:
| you and I never worked together. NCC hired you several
| months after I left. I know this because I still have the
| email thread of you asking me about the interview
| process. How often do you show up at ex-employer offices?
|
| Wally was, at the time, your line manager at NCC. You get
| what a line manager is, right? Nobody was seriously
| asking Wally for the rights to anything, but I have no
| trouble believing you have trouble interpreting a joke.
|
| You just claimed, a comment earlier, that NCC engagements
| were "a week, max". I stand by my previous comment. I
| have better information, and you have _weird_
| information. If your personal experience of NCC was a
| back-to-back sequence of 2-day projects, you were put in
| some strange back-bench situation.
|
| I left Matasano and almost immediately started another
 | consultancy, and I'll say it again: unless things have
| drastically changed since 2020, when I left for Fly.io,
| the modal pentest engagement is 3 person-weeks, not 2
| days. People do not in fact spend 50% of their time
| writing reports.
|
| We can compare notes if you like, but I don't think it's
| going to make you any happier to do so.
|
| _A moment later_
|
| I'm just sitting here thinking about this and your claim
| about documentation is even more risible than I had
| realized. By the time you were working at NCC,
| documentation was almost fully automated; we had the
| whole team working with Litany of Failure, our docgen
| system. Litany became such a big deal after I left that a
| year ago, at someone's going-away party, Erin and I got
| pins, which they'd made for everyone who worked on it.
| You were sure as shit using it in 2014.
|
| By the time you were working at NCC, the experience of
| documenting a pentest was filing bugs in a bug tracker
| that autogenerated descriptions of things like XSS, and
| then pushing a button to have a PDF pop up.
|
| 50% of your time. Give me a break.
| tptacek wrote:
| I spoke with someone who worked contemporaneously with
| you, and who thinks you might just be confused --- your
| "projects last 2 days and typically consist of 50%
| documentation" claim would be consistent with you working
| SARs rather than pentests. What you're describing does
| sound a lot more like arch review than pentesting. Maybe
| that's the miscommunication here?
|
| _Edit_
|
| Somebody else pointed out that if you were mostly working
| on retests --- that's the project where you take someone
| else's report and look to see if they fixed all the
| findings --- you'd also have had a 1-3 day documentation-
| intensive workload.
|
| Another person pointed out that netpen projects fit this
| bill too, but of course, you weren't doing back-to-back
| runs of network penetration tests out of that office.
|
| All I care about is the claim upthread that people spend
| 50% of their time on pentests documenting things. If you
| engage a firm to do a software assessment and they spend
| 50% of their time in doc, fire them.
| cj wrote:
| As a fellow YC founder, just wanted to pipe in to say
| this whole thread really isn't a good look.
|
| Whether or not the employee was right or wrong, it's
| probably best to just leave and accept the conversation
| where it is.
|
| Edit: changed "your employee" to "the employee"
| nr2x wrote:
| ESH as they say on Reddit.
| tptacek wrote:
| I'm fine with that. I'm not writing here to make myself
| look better, just to correct the record. Some
| pathological shit happens at US security consultancies;
| it's just (in the main) not the stuff this person is
| talking about.
|
| Again: I have never worked with this person, despite
| their claims and implications to the contrary. Further, I
| went way out of my way not to be a people manager (and
| still do). Nobody reports to me; that's not a thing I'm
| good at, as I'm probably making evident.
| tptacek wrote:
| This person _was not my employee_. Even if I had been at
 | the firm at the same time, he _still_ wouldn't have been
| my employee.
|
| It bothers me that they've continued to claim over the
| years that they were "fired for narcolepsy". The NCC I'm
 | familiar with paid multiple people for _years_ who weren't
 | able to deliver work because of health issues, and did
| so without blinking. There's not a lot that I especially
| like about NCC management, but their handling of health
| and disability issues would have been at the bottom of my
| list of complaints.
| cj wrote:
| Edited my comment to say "the employee" instead of "your
| employee".
|
| Either way, this thread really isn't a very good look for
| you or the person you're responding to.
|
| Sometimes not engaging is better than engaging. (And here
| is when I myself will stop engaging...)
| thaumasiotes wrote:
| > It bothers me that they've continued to claim over the
| years that they were "fired for narcolepsy". The NCC I'm
| familiar with paid multiple people for years who weren't
| able to deliver work because of health issues, and did so
| without blinking.
|
| Maybe they've got special policies on health. I am not
| impressed with NCC Group's firing policies. I was fired
| because, as far as I can see, I was recruited to the bug
| bounty team with an offered perk (work from anywhere), I
| said I wanted the perk, they said no problem, my boss was
| promoted, and the new boss didn't want to allow the perk.
| He made repeated comments to the effect that he wasn't
| comfortable having me on his team after I'd made a
| request that he wasn't willing to grant immediately. He
| also told me that, if the worst came to the worst, I
| would be transferred back to active consultant status
| rather than being fired, which didn't happen.
| tptacek wrote:
| I am certainly not sticking up for NCC's policies about
| firing, or even Matasano's.
| nr2x wrote:
| Your ass is showing bro.
| sillysaurusx wrote:
| > your coworkers have told stories about it, and I'm not
| sure you want the conversation you're asking to have. It
| is not the same story you tell.
|
| I hereby give you permission to lay out the full story,
| full-stop. Don't hold back. I know it's usually not a
| classy move, but you're hereby absolved of that.
|
| > I don't know why you're shocked at the number of times
| I "showed up" at the NCC Chicago office.
|
| I wasn't shocked. It was to point out that you weren't in
| the field anymore. We were.
|
| > Wally was, at the time, your line manager at NCC. You
| get what a line manager is, right? Nobody was seriously
| asking Wally for the rights to anything, but I have no
| trouble believing you have trouble interpreting a joke.
|
| No doubt. But that raises the question of why you met
| with Wally privately. That's a bit at odds with your
| immediate previous claim that there's no reason to show
| up to your ex-employer's office. But this is just a
| distraction.
|
| > You just claimed, a comment earlier, that NCC
| engagements were "a week, max". I stand by my previous
| comment. I have better information, and you have weird
| information. If your personal experience of NCC was a
| back-to-back sequence of 2-day projects, you were put in
| some strange back-bench situation.
|
| Pop quiz: If I worked there for a full year -- 52 weeks
| -- then how did I complete over 40 engagements?
|
| > I'm just sitting here thinking about this and your
| claim about documentation is even more risible than I had
| realized. By the time you were working at NCC,
| documentation was almost fully automated; we had the
| whole team working with Litany of Failure, our docgen
| system. Litany became such a big deal after I left that a
| year ago, at someone's going-away party, Erin and I got
| pins, which they'd made for everyone who worked on it.
| You were sure as shit using it in 2014.
|
| Yes, we used Litany exclusively for our docgen. That's
| the templating system that creates the PDFs. It's quite a
| nice system; thanks for making it.
|
| It doesn't change the fact that you have to actually fill
| it out with words.
|
| > 50% of your time. Give me a break.
|
| We did.
|
| So, let's hear those stories. I imagine it'll go
| something like this: "That Shawn was such a slacker that
| we couldn't figure out what to do with him. He spent most
| of his time fooling around on the computer. At one point
| he went to the bathroom for a full half hour."
|
| My work spoke for itself. Pentesting was a necessary part
| of the job, but the product is the report. The report is
| what the client sees, and (in some cases) what they pay
| several hundred thousand dollars for. Is it any surprise
| that a huge amount of time is spent preparing this
| extremely-valuable product for consumption? This isn't
| even a particularly strange claim; Damien was the one who
| pointed out to me that the PDF is what clients are paying
| for.
|
| I'd be interested to compare notes. What did you have in
| mind?
| tptacek wrote:
| See upthread.
|
| I think these claims are pretty much risible. The report
| you're talking about is autogenerated from a bug tracker.
| The client sees your bugs. If you spend 50% of your time
| writing them, you're cheating the client.
|
| What I really think happens is that you misinterpret
| things people tell you and blow them into weird
| directions. Somebody told you that "the PDF is what
| clients are paying for". Well, no shit. The PDF is the
| only deliverable from the project. It's where the bugs
| are written down.
|
| I wasn't there for whatever you got told, but it sounds
| to me like the subtext of it was probably "so it doesn't
| matter much what you do on a project as long as the
| client ends up with a report that they can use to claim
| they've had an assessment done". That's a cynical thing
| to say, but definitely a thing that gets said. It's also,
| strictly speaking, true.
|
| What I hear you saying is, "the PDF is the only thing
| that matters, so we should spend most of our time making
| the best possible PDF". That's not only silly, but also
| actively difficult to do in a practice where the reports
| are autogenerated.
|
| The actual figure of merit from a software pentest is the
| list of bugs, full stop. Yes, the list is delivered in
| PDF. Don't let that confuse you.
|
| I don't think you've worked in this field seriously
| enough or long enough to use the word "we" the way you
| are.
| thomc wrote:
| Can confirm a 2 day engagement is unusual, and 50% of
| time writing the report is possible but very much an
| outlier for standard pen tests. Some interesting
| exceptions include:
|
| * Some regions have a much shorter average engagement
| time. North America is usually pretty generous, where
| markets in other countries will only bear half or a third
| of the time.
|
| * If you are a junior or less skilled you are perhaps
| more likely to get the small jobs while you are learning.
|
 | * External infrastructure tests can be short on testing
 | time and long on reporting if you find lots of issues, but
 | automation helps the reporting in that regard.
|
 | * Some pentests are very documentation-intense for
 | specific reasons, such as M&A due diligence, or clients
 | who want threat models and design reviews included. Still
 | isn't 50%, though.
|
| And others. But in general what Thomas describes has been
| my experience over the years.
|
| Disclaimer: I work for NCC, but nothing related to former
| Matasano and I don't know Thomas. Opinions are my own.
| sillysaurusx wrote:
| Thank you. (And thanks for being dispassionate; it's a
| nice change.)
|
| It sounds like the most likely explanation is that
| Matasano was an outlier. My career was cut short before I
| had a window into the rest of the pentesting world, but
| it's good to hear that places exist that aren't so
| obsessive about the actual writing process. I also
 | happened to experience most of your list, so it sounds
 | like it was an exceptional situation in general, and it's
 | best not to draw sweeping conclusions from it.
|
| Cheers!
| bink wrote:
| It's interesting to read about other philosophies for
| engagements. In the places I've worked it would be rare
| to send a junior engineer on a short engagement. The
| reason being that short engagements are usually 1
| engineer, maybe 2. There are always tools and tests that
| take time and it's better to have 1 engineer for 2 days
| than 2 engineers for 1 day. We'd send our junior
| engineers on the multiweek engagements so they'd learn
| more. They'd get a chance to encounter all types of
| systems and networks, and would be able to see how the
| senior engineers approach problems. We could even leave
| them to figure out complex topics on their own in some
| cases (and often they'd teach us new things in the
| process!).
|
| But as I said in another comment, depending on what
| people consider to include as "report writing" I can
| definitely see some engagements needing 50% time there.
| So maybe this person did just get unlucky.
| tptacek wrote:
| Sub-week software pentest engagements at established
| firms are pretty rare. There's a logistical reason for
| that: engagements are overwhelmingly measured in
| person/weeks, and if you book out a consultant for two
| days, you fuck the schedule for the rest of that person's
| week. It's the same reason (or one of them) that you
| shouldn't bill hourly if you do your own consulting work:
| if a client books you for a couple hours in a day,
| they've fucked the rest of the day for you.
|
| A 1 person-week engagement is pretty short. On a 1 p/w
| engagement, you'll have scoped back drastically what you
| can test; maybe one functional area of a smallish web
 | app, or, every once in a while, you'll get a big client
| that has the budget flexibility to do things like book
| "one week of just looking for SQLI and nothing else
| across all our internal web apps".
|
| The typical CRUD app for a small tech company would tend
| to come in between 3-4 person weeks. Sometimes, those
| engagements would have their last 2 days explicitly
| reserved for doc in the SOW. I felt like (still feel
| like) that's rustproofing; clients are paying for
| testing, not writing. Usually there's a couple days of
| "discovery" at the beginning. The rest of it is just
| testing.
|
| The typical order of a project with a _public_ report
| (those are pretty infrequent) is that the public report
 | is done after the original test is accepted. That's
| in part because clients want to triage and remediate
| findings before they release a public report; you sort of
 | _can't_ drop a public report and the internal report at
| the same time. So public report writing shouldn't have
| much of an impact on the project delivery schedule,
| because it's not done at the same time.
| bink wrote:
| For sure, a short engagement of 1-2 days would be rare.
| We'd occasionally do them to get a foot in the door or as
| a follow-up for a regular customer. We'd still not want a
| junior engineer on them. You want to make as good of an
| impression as you can so you get the bigger contracts and
| you don't want someone there who isn't experienced
| communicating with clients.
| tptacek wrote:
| Yeah, that's probably true.
|
| But, per the thread, there are some special-case projects
| that are short and do take junior delivery staff; SARs,
| which are "pentest the documentation" projects meant to
 | derive a threat model and inform a later, real, test (I
| don't like SARs and think they're a kind of consulting
| rustproofing, but people smarter than me disagree
| strongly with this), and, of course, retests.
| sillysaurusx wrote:
| It's an interesting tactic to set up a strawman and then
| beat it up. Where am I to start when the reply will
| mostly be "That's not true" or "I didn't say that"? This
| is devolving into being boring for the audience, but
| you're (for some reason) attacking my reputation. You're
| also backpedaling; I thought you were going to tell
| stories? I'd like to hear them. Or did you check with
| someone, and they said "Um, actually, Shawn wasn't that
| bad"?
|
| > I think these claims are pretty much risible. The
| report you're talking about is autogenerated from a bug
| tracker. The client sees your bugs. If you spend 50% of
| your time writing them, you're cheating the client.
|
| You keep saying the report is autogenerated because
| Litany existed. This is a bit like claiming that
| scientific papers are autogenerated because LaTeX exists.
| Yes, it does a lot of heavy lifting. No, it doesn't
| change the fact that you start with a template and then
| rework it into the proper form. As any grad student will
| tell you, that work takes a lot of time. My experience at
| Matasano was similar. I bet Drew, Andy, Damien, Dmitri,
| and a few other folks would at least say that reporting
| occupied a significant chunk of time.
|
| From where I'm sitting, the claim seems unremarkable.
| Look at how long this thread's security assessment is.
| Most of the words are boilerplate, but you can't simply
| ship boilerplate. And yeah, the reports went through
| multiple rounds of back-and-forth before they got
| shipped, during which every detail was combed over.
|
| > What I really think happens is that you misinterpret
| things people tell you and blow them into weird
| directions. Somebody told you that "the PDF is what
| clients are paying for". Well, no shit. The PDF is the
| only deliverable from the project. It's where the bugs
| are written down.
|
| It wasn't "someone." Damien was one of the most
| experienced pentesters at Matasano. He led several
| redteam projects, and taught me a lot of interesting
| tricks for sneaking your way into a network.
|
| > I wasn't there for whatever you got told, but it sounds
| to me like the subtext of it was probably "so it doesn't
| matter much what you do on a project as long as the
| client ends up with a report that they can use to claim
| they've had an assessment done". That's a cynical thing
| to say, but definitely a thing that gets said. It's also,
| strictly speaking, true.
|
| This is a strawman. What he was saying was that we need
| to do a good job on the report, in addition to the
| hacking. I don't know why you'd spin it into some cynical
| thing, but at least you're consistent.
|
| > The actual figure of merit from a software pentest is
| the list of bugs, full stop. Yes, the list is delivered
| in PDF. Don't let that confuse you.
|
| We can debate who's the one confused, but at this point
| it's pretty clear that our experiences were dramatically
| different. What possible benefit would it be to me to sit
| here and lie about it? Not only would you (and everyone
| else) call me out, but I'd have to keep making up more
| and more elaborate falsehoods.
|
| Sounds exhausting. I'm just reporting what I saw, and
| what I did.
|
| I don't see a productive way to continue this. Your
| position is that "nobody spends 50% of their time on
| reports." I'll concede that maybe it's closer to 40%. But
| it's certainly not 10%, or whatever small amount you're
| hinting at. And as you pointed out, my last month was
| filled with 100% documentation-writing.
|
| Let's agree to disagree and move on.
| tptacek wrote:
| I'm comfortable with what the thread says about how we
| came at this disagreement.
|
| As you've acknowledged upthread: your claim that
| documentation is 50% of the time on a pentest doesn't
| hold up. I believe it took 50% of your time, because you
| say it did, but like you said upthread: it was an
| exceptional case. Maybe:
|
| 1. You spent more time on doc than was actually required
|
| 2. You worked a bunch of SARs, which are short doc-
| focused projects (and not pentests)
|
| 3. You were given a bunch of retests, which are short
| doc-focused projects; this almost fits, since it's some
| of the first work that's given to new team members after
| they're done shadowing on projects, except that it would
| be weird for there to be so much retest work that you
| could do them back-to-back for a long period of time
| (most clients don't purchase retests)
|
| 4. You worked ENPTs (external netpens), which are short
| projects and have a huge doc component of "fitting tool
| results into a report". But (a) your office didn't do a
| lot of those (Matasano in general didn't do a lot of
| netpen work) and (b) it wasn't low-seniority work; as I
| recall, the people who did netpens specialized in it.
|
| It could be some combination of all these factors.
|
| Meanwhile: Litany is nothing at all like LaTeX (though
| after I left, LaTeX staged a coup and replaced the
| commercial PDF library it had been using). Litany is a
| bug tracker. You enter a finding title, a URL/location,
| and a description --- which Litany _autogenerates_ for
| common bug classes --- and when you're done with the
| project you push a button and get a complete PDF. This
| would have been the primary way all reporting was done
| during your tenure.
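|
| To make that concrete, here's a toy sketch of the "enter
| findings, push a button, get a report" model (the field
| names and canned text below are invented for illustration;
| this is not Litany's actual schema or code):
|
|     # Toy "bug tracker in, report out" generator.
|     CANNED = {
|         "xss": "User input is reflected without encoding.",
|         "sqli": "User input reaches a SQL query unescaped.",
|     }
|
|     def finding(title, location, bug_class=None, text=None):
|         # Common bug classes get an autogenerated
|         # description; unusual bugs are written by hand.
|         return {"title": title, "location": location,
|                 "text": text or CANNED.get(bug_class, "")}
|
|     def render(findings):
|         # Stand-in for the push-button PDF step.
|         out = []
|         for i, f in enumerate(findings, 1):
|             out.append(f"Finding {i}: {f['title']}")
|             out.append(f"  Location: {f['location']}")
|             out.append(f"  {f['text']}")
|         return "\n".join(out)
|
|     print(render([finding("Reflected XSS in search",
|                           "/search?q=", bug_class="xss")]))
|
| The point being: the consultant mostly types the title and
| location; the prose around the bugs is largely canned.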
|
| Interestingly, one of the few major disputes between
| Matasano and iSec Partners (the big two constituent firms
| in NCC US, iSec bigger by a factor of almost 2) was
| report generation. iSec actually had a LaTeX template
| consultants would use to generate reports; they wrote
| them by hand. "I refuse to use Litany" (and lose control
| over my report formatting) was supposedly such a big
| thing that they had to rename Litany when they re-
| introduced it across the firm; it's now called Limb.
| Which is a tragedy, because Litany of Failure is one of
| the all-time great product names (credit, I believe, to
| Craig Brozefsky).
|
| Some of the people you've mentioned in this thread have
| reached out to me. I stand by literally everything I've
| said. It's OK to be wrong, and you are.
|
| The rest of it:
|
| d0136ada7858ad143675c2c3302db6343869190878191c8fc6a1547f0
| f23530e
|
| Just in case this ever comes up again, I'll at least
| commit to not having changed my story. :)
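|
| (For the curious: that's a hash commitment. Hash the text
| now, post only the digest, reveal the text later if it
| matters. A minimal sketch of the later verification step,
| assuming the digest above is SHA-256:)
|
|     import hashlib
|
|     def verify(revealed_text, posted_digest):
|         # Recompute the digest of the revealed text and
|         # compare it to the one posted above.
|         digest = hashlib.sha256(
|             revealed_text.encode()).hexdigest()
|         return digest == posted_digest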
|
| Long story short: don't pay for software pentests that
| spend 50% of their time in doc. You're paying for the
| testing, not the report, even if the report is all you
| ultimately care about.
| yao420 wrote:
| I too worked for NCC Group. Before joining we spoke and you
| sent me a copy of the Web App Hacker's Handbook and The Art
| of Software Security Assessment. I then worked for NCC the
| following 5 years.
|
| Yes, report writing was the worst and half the week was
| easily dedicated to that. PMs and account managers pushed
| Friday readouts, so it was a high priority and you would
| stay late to finish everything before you started another
| assessment the following Monday. A readout for last week
| and kickoff for a new project on the same day were the
| worst, but that happened every so often.
|
| It was never find a bug, report and move on. It was
| dedicate the whole day to making it clear, presenting to
| the client and integrating feedback.
|
| Don't get me started on review. Every client doc had to
| have two reviews, at least one from a senior, and people
| put them off as much as possible.
| Izmaki wrote:
| This is my experience as well. The first paragraph, that is.
| jasong wrote:
| Never did a security audit, but I did regulatory and
| financial audits of banks. Most of our work involved
| looking at last year's work, re-performing it and changing
| dates. Writing reports + financial statement notes followed
| a similar process.
|
| Exceptions are immediately escalated to the audit committee
| and sometimes end up as a small footnote in the public
| reports. Most of the time it says "we were unable to
| collect sufficient evidence" to provide assurance. Almost
| never "this was done wrong".
|
| It's interesting to see how the second report differed from
| their first assessment earlier in the year:
| https://research.nccgroup.com/2021/04/08/public-report-vpn-b...
|
| Most of the findings in the first report were "Fixed".
| tptacek wrote:
| This is a good call-out. The software security field uses
| the term "audit" in a deeply weird way. A real audit,
| like the kind done in a SOC2 assessment, really is about
| the report, and is more like 90% paperwork than 50%.
|
| Here we're talking about software pentesting, and
| consulting firms have historically used the word "audit"
| to elevate that work. But these engagements almost never
| follow the shape of a real audit; there are no spelled-out
| audit criteria, there's no reconciliation against
| previous results, and the focus of the engagement is
| almost purely the exceptions, with little to no
| documentation of the tests completed without findings.
| thaumasiotes wrote:
| You've noted elsewhere in the thread that pentesting has
| the concept of a "retest". A retest purely examines the
| findings of some original earlier report. Any other
| vulnerabilities are out of scope, no matter how serious.
|
| This seems like a better match to the regulatory audit
| model, but it's a bad match to the problem people would
| like to think of audits as solving, which is determining
| "are there any problems?".
|
| The normal pentesting use of "audit" draws on that
| intuitive meaning of the word; the concept is that you
| answer whether problems exist. But it's deeply weird
| anyway, because the answer can't be known, only guessed.
|
| I seem to remember PCI being called out as a very
| different report type from the usual, closer to the
| regulatory audit model at least in that the process was
| heavily regulated and old problems had to be fixed and
| retested. I never did a PCI report; don't hold me to
| anything.
| tptacek wrote:
| Yes, exactly. There is a weird conflict of interest thing
| happening with a lot of public-facing security assessment
| work. The client wants a clean bill of health. The
| delivery consultants can't honestly sell that, at least
| not without a huge project scope stretching into double-
| digit person/months. But the firm wants to sell
| engagements, and public reports are a condition of the
| engagement.
|
| So we have this phenomenon of "audit reports" that are
| really anything but that. Very few people in the industry
| know how to read them (for instance, how to locate and
| evaluate the scope of the project). But they're
| effectively used as seals of approval by clients. Which
| creates an even bigger incentive for firms to sell them,
| to the point where there are firms that almost specialize
| in doing them.
|
| PCI is closer to the audit model, and yet even less
| effective than the pentest model, because the
| standardized delivery model created a race-to-the-bottom
| effect in the market.
|
| My oddball position on this stuff: firms shouldn't do
| external reports at all, and clients should just be able
| to post their internal reports.
| lucb1e wrote:
| > Nobody spends 50% of their time writing reports. If
| anything, it's the exact opposite
|
| At my employer we don't, but a friend said that 50% of
| their time at a big multinational that does pentesting and
| other auditing services went to reporting and fighting the
| MS Word template. Apparently that's not abnormal there, and
| it's hard enough to change that my friend would rather
| leave than work with the team to get it fixed.
|
| > This "multiple rounds of review" thing is news to me too.
|
| Depends on the definition, like do you count your teammates
| reading what you wrote on top of the boss signing off on
| the report? If yes, then we have multiple rounds of reviews
| for every report. If no, then we only have it for certain
| (not all) public reports. Again, not a weird thing.
|
| (To get a bit meta, your writing style across any security-
| related comments is a bit "this is definitively how it
| definitely is" whereas... well, in this case it's
| objectively not true. Some orgs do spend (waste)
| disproportionate amounts of time on reporting.)
|
| > the ordinary process for a project like this with a
| public report is, you deliver the whole project as if there
| wasn't a public report, with an internal report, and then
| the public report is based on the internal one (probably
| post-remediation)
|
| This is also not how we do it. We typically know beforehand
| if it is going to be public and the report that gets sent
| to the customer and released to the public is literally the
| same file. It's not that a second report gets written
| "based on" the real report. That's how we maintain
| integrity as a third-party auditor: we don't sugar-coat or
| alter a report because it's going to be public. If desired,
| both for public and nonpublic, we can do a retest and add
| remediation status. What is different is that we triple
| check for typos, lines running off the page, etc., and
| might be more verbose so a wider audience can understand
| the findings (some customers are very technical, others not
| at all; for a public report we mostly switch to non-
| technical-customer mode). Interesting to know that this is
| not, at least not ordinarily, the case at places you
| work(ed).
| tptacek wrote:
| I can't speak to the big multinational your friend works
| at, only NCC, and, I guess, a bunch of boutique-y firms
| that don't qualify as "big multinationals" the way NCC
| does. And, just generally: if you're contracting software
| pentests, you should _not_ be OK with the prospect of
| your consultants spending 50% of their time in doc. That
| isn't a norm. You should expect most of the time to be
| spent looking for the bugs you're paying for them to
| find.
|
| _After your edit_
|
| With respect to your thing about how public reports work,
| I appreciate the intellectual honesty of the approach
| your firm takes, and I have real problems with public
| reports in general (something I've ranted about here
| before).
|
| But at a big US pentest firm I think it'd be hard to do
| public reports any other way. For one thing: clients
| routinely dispute findings, and they're often right (they
| know more about their own code than the testers do).
| You'd _have_ to let them see the report before you could
| write a version of the report for the public, so you can
| scrub out the findings that don't pan out.
|
| Another obvious issue is that reports often include
| tentative findings, which need to be confirmed before the
| report is finalized.
|
| I'm interested, just as a security consulting nerd, in
| how your process handles these things. But past that, the
| question that started this thread was whether having
| Google breathing down your neck would make this a hard
| engagement to deliver, and I think the answer to that is
| almost certainly "no". If anything, Google wants good
| findings, and the stressful part of this engagement would
| be coming up with a report that never gets anything more
| than a sev:med.
|
| (I don't know, Google's a big company, I haven't done a
| software assessment in over a year, &c &c).
| lucb1e wrote:
| > I guess, a bunch of boutique-y firms that don't qualify
| as "big multinationals" the way NCC does
|
| NCC has on the order of 1% of the revenue of this
| "boutique-y firm". I'd mention the name but then I'd want
| to check with $friend first in case it would somehow make
| them identifiable.
|
| > you should _not_ be OK with the prospect of your
| consultants spending 50% of their time in doc
|
| Agreed on that
| tptacek wrote:
| Just so we're clear: I'm saying _my personal experience_
| extends only as far as a single org of NCC's size, and
| then a variety of boutique-y firms of ~Matasano's size. I
| have no experience with Deloitte or PwC. I'm not calling
| your friend's firm a boutique; it's clearly not.
| lucb1e wrote:
| Ah, sorry that was a misunderstanding on my part!
| lucb1e wrote:
| (Sorry about the substantial edit after you apparently
| already read my comment. Most of the time people don't
| see it that fast ^^')
|
| > at a big US pentest firm I think it'd be hard to do
| public reports any other way [...] You'd _have_ to let
| [clients] see the report before you could write a version
| of the report for the public, so you can scrub out the
| findings that don't pan out.
|
| We do let them see it beforehand, also because there is
| time needed to roll out fixes, but we aren't often asked
| to make substantial changes (and if they ask, whether we
| can oblige depends). Sometimes we make a real mistake and
| then it just looks silly for both parties so that would
| be removed, but usually things are just factually
| correct. For example, we can't always know about
| backports when we see an old version (and PoCs / detailed
| info is notoriously missing from CVEs, one of my pet
| peeves), so we recommend they check that the thing is up
| to date and implement an update procedure if it's not.
| That advice is valid no matter the actual state. We can
| mark it as checked and OK as part of a retest (if we got
| ssh to see for ourselves, or we would write something
| like "$customer checked and says it was up to date" as
| the retest description alongside a 'resolved' marker).
|
| > Another obvious issue is that reports often include
| tentative findings,
|
| Is this like an int overflow that you're not yet sure is
| exploitable or so? Or do you have an example of this? I'm
| not familiar with the concept of a tentative finding
| unless it's like a little note "this search function at
| /search.php produces a weird error with input {{1+1}} but
| I haven't found a useful bug yet, todo investigate more".
|
| If it's the latter, we don't report those if we couldn't
| find anything clearly wrong with it. If it's just not
| following the spec (HTML entity encoding issues are a
| common example) but we don't find a security flaw, then
| it's an informational note at best. Maybe we should be
| more pushy with these in case one turns out to be
| exploitable.
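|
| (Notes like that are mechanical enough to triage with a
| quick script, for what it's worth. A rough sketch using the
| classic {{7*7}} probe for template injection; the target
| URL is made up:)
|
|     import requests
|
|     def probe_ssti(url, param="q"):
|         # If {{7*7}} comes back as 49, the template engine
|         # evaluated our input; if the braces come back
|         # verbatim, it's probably just a weird error. Crude
|         # check: a page that already contains "49" will
|         # false-positive, so confirm by hand.
|         body = requests.get(url, params={param: "{{7*7}}"},
|                             timeout=10).text
|         if "49" in body and "{{7*7}}" not in body:
|             return "likely template injection, escalate"
|         return "no evaluation seen, informational at best"
|
|     # probe_ssti("https://example.test/search.php")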
|
| > the question that started this thread was whether
| having Google breathing down your neck would make this a
| hard engagement to deliver, and I think the answer to
| that is almost certainly "no".
|
| I mostly agree there. It's definitely a bit more work
| with our process because what the test team sends out for
| internal review is supposed to be basically ready to
| publish, so they'll do more double checking of the facts
| before git committing the finding and such. But at the
| same time, because the routine is almost identical for
| private and public reports, it's not very special even
| for big customers that people here would recognize.
|
| Is the engagement harder? Slightly, because you don't
| want to make the firm look stupid by doing something
| wrong. Does it qualify for the statement "hard to
| deliver"? Nah, I agree that the answer is "no".
|
| > the stressful part of this engagement would be coming
| up with a report that never gets anything more than a
| sev:med
|
| This. Or even nothing above 'low' if the attack surface
| is small and the devs are up to date on the relevant
| security best practices.
| jmclnx wrote:
| > "With VPN by Google One, we will never use the VPN connection
| to track, log, or *sell* your online activity.
|
| This can be expanded to: "And by using Google VPN, only Google
| can see your search history and WEB Sites you visit. Apple will
| not see anything they do not need to see." :)
___________________________________________________________________
(page generated 2022-12-11 23:00 UTC)