[HN Gopher] Apt Encounters of the Third Kind
       ___________________________________________________________________
        
       Apt Encounters of the Third Kind
        
       Author : lormayna
       Score  : 458 points
       Date   : 2021-03-26 13:09 UTC (9 hours ago)
        
 (HTM) web link (igor-blue.github.io)
 (TXT) w3m dump (igor-blue.github.io)
        
       | gue5t wrote:
       | Please change the posting title to match the article title and
       | disambiguate between APT (Advanced Persistent Threats, the
       | article subject) and Apt (the package manager).
        
         | jwilk wrote:
         | FWIW, the package manager is also spelled APT.
        
           | gue5t wrote:
           | You're right... what an annoying namespace collision. On the
           | other hand, stylizing software as Initial Caps is much more
           | acceptable than stylizing non-software acronyms that way, so
           | it would still be less misleading to change the
           | capitalization.
        
             | IgorPartola wrote:
             | Would you say that these things aren't aptly named?
        
               | diogenesjunior wrote:
               | ha
        
         | milliams wrote:
         | Especially where the article doesn't define it or even use the
         | term "APT" except in the title.
        
         | dopidopHN wrote:
          | Thanks, I don't work in security but I use APT a lot. I thought
          | it was an unfunny joke? Like ... APT provides some of those
          | packages? OK. That makes more sense.
         | 
          | The author did a good job of making that readable. Is it often
         | like that?
        
         | diogenesjunior wrote:
         | I thought we couldn't edit titles?
        
           | kencausey wrote:
           | Yes, the poster can for a limited time, 2 hours I think.
        
             | anamexis wrote:
             | Also the mods can and often do.
        
           | lormayna wrote:
           | Poster here. Do you think I need to edit the title? This
           | title was funny to me, but probably just because I am a
            | security guy and I know what an APT is.
        
             | airstrike wrote:
              | It should match the original, unless you have a strong
              | reason not to, i.e. if it breaks the guidelines somehow.
             | 
             | https://news.ycombinator.com/newsguidelines.html
        
             | 8note wrote:
              | Switching Apt to APT would add a lot of clarity while barely
             | changing the title
        
       | npsimons wrote:
       | This is the kind of content I come to HN for! I don't get to do a
       | lot of low level stuff these days, and my forensics skills are
       | almost non-existent, so it's really nice to see the process laid
       | out. Heck, just learning of binwalk and scapy (which I'd heard
       | of, but never looked into) was nice.
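        | 
        | For anyone who hasn't touched scapy before, a minimal sketch of
        | the kind of passive pcap triage the article describes might look
        | like this (the capture file and the "top 10" cutoff are just
        | placeholders):
        | 
        |     from scapy.all import rdpcap, IP, TCP
        | 
        |     packets = rdpcap("capture.pcap")   # hypothetical capture
        |     flows = {}
        |     for pkt in packets:
        |         if pkt.haslayer(IP) and pkt.haslayer(TCP):
        |             key = (pkt[IP].src, pkt[IP].dst,
        |                    pkt[TCP].sport, pkt[TCP].dport)
        |             # total payload bytes carried per flow
        |             size = len(pkt[TCP].payload)
        |             flows[key] = flows.get(key, 0) + size
        | 
        |     # the ten chattiest flows are a decent place to start looking
        |     for flow, total in sorted(flows.items(),
        |                               key=lambda kv: -kv[1])[:10]:
        |         print(flow, total)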
        
       | xmodem wrote:
       | This is truly the stuff of nightmares, and I'm definitely going
       | to review our CI/CD infrastructure with this in mind. I'm eagerly
       | awaiting learning what the initial attack vector was.
        
         | lovedswain wrote:
         | 9 times out of 10, through the front door. Some shit in a .doc,
          | .html or .pdf. The Google-China hack started with targeted
          | PDFs.
        
           | ducktective wrote:
            | How is such an attack even possible? A bug in LibreOffice,
            | the browser, or Evince?
        
             | yjftsjthsd-h wrote:
             | PDF is a nightmare format, including such gems as
                | JavaScript IIRC; it's not surprising that it can be used
                | to exploit reader software.
        
               | ducktective wrote:
                | So the attacker has to have exploits for every PDF
                | reader app on Linux? It's not Adobe-only, and there are
                | quite a few. Or maybe a common backend engine (mupdf and
                | poppler)...
        
               | josephg wrote:
               | An attacker doesn't need every attack to work every time.
               | One breach is usually enough to get into your system, so
               | long as they can get access to the right machine.
               | 
               | I heard a story from years ago that security researchers
               | tried leaving USB thumb drives in various bank branches
               | to see what would happen. They put autorun scripts on the
               | drives so they would phone home when plugged in. Some 60%
               | of them were plugged in (mostly into bank computers).
        
               | yjftsjthsd-h wrote:
                | Yeah, I suspect that rather a lot of the options use the
               | same libraries;
               | https://en.wikipedia.org/wiki/Poppler_(software) claims
               | that poppler is used by Evince, LibreOffice 4.x, and
               | Okular (among others).
        
           | diarrhea wrote:
           | If people didn't allow macros in Excel, stayed in read-only
           | mode in Word and only opened sandboxed PDFs (convert to
           | images in sandbox, OCR result, stitch back together), we
           | would see a sharp decline in successful breaches. But that
           | would be boring.
        
             | pitaj wrote:
             | I think opening all PDFs in a browser would be good
             | enough(tm) as browser sandboxes are about as secure as
             | sandboxing gets.
        
       | alfiedotwtf wrote:
        | Started slow to get me hooked, then bam... slapped me in the
        | face with a wild ride.
       | 
       | Reading this, I know of places that have no hope against someone
        | half as decent as this APT. The internet is a scary place.
        
       | mwcampbell wrote:
       | I'm trying to find the lesson in here about how to prevent this
       | kind of incident in the first place. The nearest I can find is:
       | don't build any production binaries on your personal machine.
        
         | jeffrallen wrote:
         | The lesson is in the essay from James Mickens, above.
        
           | mwcampbell wrote:
           | But isn't that just defeatist? Can't we continue to ratchet
           | up our defenses?
        
         | rcxdude wrote:
         | Reproducible builds can go a long way, along with a diverse set
         | of build servers which are automatically compared. Whether you
         | use your personal machine or a CI system there's still the risk
         | of it being compromised (though your personal machine is
         | probably at a little more risk of that since personal machines
         | tend to have a lot more software running on them than CI
         | systems or production machines).
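          | 
          | As a sketch of the "automatically compared" part (builder
          | names and paths are hypothetical, and it assumes the build is
          | already bit-for-bit reproducible):
          | 
          |     import hashlib, sys
          | 
          |     def sha256(path):
          |         h = hashlib.sha256()
          |         with open(path, "rb") as f:
          |             for chunk in iter(lambda: f.read(1 << 20), b""):
          |                 h.update(chunk)
          |         return h.hexdigest()
          | 
          |     # same tagged commit, built on independent machines
          |     builds = {"ci-aws": "out/aws/app",
          |               "ci-hetzner": "out/hetzner/app",
          |               "laptop": "out/local/app"}
          |     digests = {n: sha256(p) for n, p in builds.items()}
          | 
          |     if len(set(digests.values())) != 1:
          |         print("MISMATCH - refusing to release:", digests)
          |         sys.exit(1)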
        
         | stickfigure wrote:
         | Use a PaaS like Heroku or Google App Engine, with builds
         | deployed from CI. All the infrastructure-level attack surface
         | is defended by professionals who at least have a fighting
         | chance.
         | 
         | I feel reasonably competent at defending my code from
         | attackers. The stuff that runs underneath it, no way.
        
         | Nextgrid wrote:
         | This is why I always insist on branches being protected at the
          | VCS server level so that _no_ code can sneak in without someone
          | else's approval - the idea is that even if your machine is
         | compromised, the worst it can do is commit malicious code to a
         | branch and open a PR where it'll get caught during code review,
         | as opposed to sneakily (force?) pushing itself to master.
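          | 
          | For GitHub specifically, a minimal sketch of switching that on
          | via the branch-protection API (repo and token are placeholders;
          | other forges have their own equivalents):
          | 
          |     import requests
          | 
          |     url = ("https://api.github.com/repos/acme/payments"
          |            "/branches/master/protection")
          |     payload = {
          |         "required_pull_request_reviews":
          |             {"required_approving_review_count": 1},
          |         "enforce_admins": True,        # no bypass for admins
          |         "required_status_checks": None,
          |         "restrictions": None,
          |         "allow_force_pushes": False,   # no force-push to master
          |     }
          |     r = requests.put(url, json=payload, headers={
          |         "Authorization": "token <personal-access-token>",
          |         "Accept": "application/vnd.github+json",
          |     })
          |     r.raise_for_status()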
        
           | dathinab wrote:
            | In this case no CI was involved, so that wouldn't have
            | helped.
            | 
            | (It wasn't a CI system that was compromised but a dev
            | laptop, which was used to manually build+deploy the
            | kernel.)
           | 
            | Though generally I agree with you.
        
         | h2odragon wrote:
          | I'm _paranoid_, and I'd have considered the efforts described
         | here to be pretty secure. I'll say the only counter to _this_
         | grade of threat is constant monitoring, by a varied crew of
          | attentive, inventive, and interested people. Even then,
          | there's probably going to be a lot of luck needed.
        
           | zozbot234 wrote:
            | One sensible mitigation to this grade of threat: avoid
           | running Windows, even as a VM host as the dev did. It's a
           | dumpster fire.
        
             | xmodem wrote:
             | I think you may have misinterpreted that part of the post -
             | my understanding is that the Linux laptop that was being
             | used was compromised, and there was a 3 month gap when that
             | developer switched to a Windows machine before that became
             | compromised too. Specifically it would be fascinating to
             | learn whether the Windows host was compromised or if it was
             | only the Linux VM.
        
               | ducktective wrote:
               | > ...It looks as if it took the attackers three months to
               | gain access back into the box and into the VM build...
               | 
                | How were the attackers able to gain access _again_ after
                | the developer used a VM in Windows? My guesses:
               | 
                | - The developer machine was compromised at a deeper level
               | (rootkit?)
               | 
               | - The developer installs a particular application in each
               | Linux box
               | 
               | - There is a bug in an upstream distro
        
               | dathinab wrote:
                | > The developer machine was compromised at a deeper level
               | (rootkit?)
               | 
                | Unlikely; that would not have taken 3 months.
               | 
               | > The developer installs a particular application in each
               | Linux box
               | 
                | Possible, but also unlikely; as long as the VM wasn't
                | used for other things, this also wouldn't have taken 3
                | months.
               | 
                | > There is a bug in an upstream distro
               | 
               | There probably is, but it probably has nothing to do with
               | this exploit. For the same reasons as mentioned above.
               | 
               | My guess is that it was a targeted attack against that
               | developer and there is a good chance the first attack and
                | the second attack used different attack vectors, hence
                | the 3-month gap.
        
           | kiliancs wrote:
            | Traffic analysis and monitoring will detect signs of
            | intrusion almost in real time, as well as exfiltration. The
            | network never lies.
        
             | marcosdumay wrote:
             | > The network never lies.
             | 
             | Steganography begs to differ.
             | 
             | How much free entropy do you have on your network traffic?
             | 
             | EDIT: Corrected. Thanks cuu508.
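              | 
              | One crude way to put a number on that: the Shannon
              | entropy of the payload bytes in a capture (scapy
              | sketch, file name made up):
              | 
              |     import math
              |     from collections import Counter
              |     from scapy.all import rdpcap, Raw
              | 
              |     def shannon(data):   # bits of entropy per byte
              |         n = len(data)
              |         return -sum(c / n * math.log2(c / n)
              |                     for c in Counter(data).values())
              | 
              |     pkts = rdpcap("green.pcap")
              |     blob = b"".join(bytes(p[Raw]) for p in pkts
              |                     if p.haslayer(Raw))
              |     print(round(shannon(blob), 2), "bits/byte")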
        
               | cuu508 wrote:
               | *steganography
        
             | h2odragon wrote:
             | The kind of eyes that can spot the hinky pattern while
              | watching that monitor are the vital ingredient, and that's
              | not something I can quantify. Or even articulate well.
        
         | marcosdumay wrote:
         | Hum... On what machine do you build them?
         | 
          | That cannot be the right lesson, because there's no inherent
          | reason a "personal machine" is any less safe than a "build
          | cluster" or whatever you have around. Yes, in practice it
          | often is less secure to a degree, so it's not a useless rule,
          | but it's not a solution either.
         | 
         | If it's solved some way, it's by reproducible builds and
         | automatic binary verification. People are doing a lot of work
         | on the first, but I think we'll need both.
        
         | dathinab wrote:
         | I would go further and say:
         | 
         | "Developer systems are often the weakest link."
         | 
          | (Assuming that the system itself is designed with security
          | in mind.)
          | 
          | The reasons are manifold but include:
          | 
          | - attacks against developer systems are often not considered,
          | or considered less, in security planning
          | 
          | - many of the techniques you can use to harden a server
          | conflict with development workflows
          | 
          | - there are a lot of tools you likely run on dev systems which
          | add a large (supply chain) attack surface (you can avoid this
          | by always running _everything_ in a container, including your
          | language server / the core of your IDE's auto-completion
          | features).
         | 
         | Some examples:
         | 
          | - docker group members having pseudo-root access
          | 
          | - the dev user has sudo rights, so a keylogger can gain root
          | access
          | 
          | - build scripts of more or less any build tool (e.g. npm, maven
          | plugins, etc.)
          | 
          | - locking down code execution on writable hard drives is not
          | feasible (or is bypassed by python, node, java, bash)
          | 
          | - various SELinux options messing up dev or debug tools
          | 
          | - various kernel hardening flags preventing certain debugging
          | tools/approaches
          | 
          | - preventing LD_PRELOAD breaking applications and/or test
          | suites
         | 
         | ...
        
           | tetha wrote:
            | Interestingly, this is grist for the mill of things we are
            | currently thinking about. We're in the process of scaling up
            | security and compliance procedures, so we have a lot of
            | things on the
           | table, like segregation of duties, privileged access
           | workstations, build and approval processes.
           | 
           | Interestingly, the way with the least overall headaches is to
           | fully de-privilege all systems humans have access to during
           | regular, non-emergency situations. One of those principles
            | would be that software compiled on a workstation is
            | automatically disqualified from deployment, and no human
           | should even be able to deploy something into a repository the
           | infra can deploy from.
           | 
           | Maybe I should even push container-based builds further and
           | put up a possible project to just destroy and rebuild CI
           | workers every 24 hours. But that will make a lot of build
           | engineers sad.
           | 
           | Do note that "least headaches" does not mean "easy".
        
           | dane-pgp wrote:
           | I think a big difference between build machines and dev
           | machines, at least in principle, is that you can lock down
           | the network access of the build machine, whereas developers
           | are going to want to access arbitrary sites on the internet.
           | 
           | A build machine may need to download software dependencies,
           | but ideally those would come from an internal mirror/cache of
           | packages, which should be not just more secure but also
           | quicker and more resilient to network failures.
        
         | kjjjjjjjjjjjjjj wrote:
         | Build everything on a secured CI/CD system, keep things
         | patched, monitor traffic egress especially with PII, manual
         | review of code changes, especially for sensitive things
        
         | benlivengood wrote:
         | If you use cloud services that offer automated builds you can
         | push the trust onto the provider by building things in a
         | standard (docker/ami) image with scripts in the same repository
         | as the code, cloned directly to the build environment.
         | 
         | If you roll your own build environment then automate the build
         | process for it and recreate it from scratch fairly often.
         | Reinstall the OS from a trusted image, only install the build
         | tools, generate new ssh keys that only belong to the build
         | environment each time, and if the build is automated enough
         | just delete the ssh keys after it's running. Rebuild it again
         | if you need access for some reason. Don't run anything but the
         | builds on the build machines to reduce the attack surface, and
         | make it as self contained as possible, e.g. pull from git,
         | build, sign, upload to a repository. The repository should only
         | have write access from the build server. Verify signatures
         | before installing/running binaries.
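          | 
          | A compressed sketch of that pull / build / sign / upload loop,
          | as run from the throwaway build host (repo URL, signing key
          | and artifact paths are all placeholders):
          | 
          |     import subprocess
          | 
          |     def run(*cmd):
          |         subprocess.run(cmd, check=True)
          | 
          |     run("git", "clone", "--depth", "1",
          |         "https://git.internal/app.git", "src")
          |     run("make", "-C", "src", "release")        # build
          |     run("gpg", "--local-user", "build@internal",
          |         "--armor", "--detach-sign",
          |         "src/dist/app.tar.gz")                 # sign
          |     for f in ("src/dist/app.tar.gz",
          |               "src/dist/app.tar.gz.asc"):
          |         run("curl", "-fT", f, "https://repo.internal/app/")
          |     # consumers then verify before installing:
          |     #   gpg --verify app.tar.gz.asc app.tar.gz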
        
           | tetha wrote:
           | > If you use cloud services that offer automated builds you
           | can push the trust onto the provider by building things in a
           | standard (docker/ami) image with scripts in the same
           | repository as the code, cloned directly to the build
           | environment.
           | 
           | And I guess, for those super-critical builds, don't rely on
           | anything but the distro repos or upstream downloads for
           | tooling?
           | 
           | Because if you deploy your own build tools from your own
            | infra, you risk tainting the chain of trust with
           | binaries from your own tainted infra again. I'm aware of the
           | trusting trust issue, but compromising the signed gcc copy in
            | Debian's repositories would be much harder than some copy of a
           | proprietary compiler in my own (possibly compromised) binary
           | repository.
        
             | benlivengood wrote:
             | > And I guess, for those super-critical builds, don't rely
             | on anything but the distro repos or upstream downloads for
             | tooling?
             | 
             | You can build more tooling by building it in the trusted
             | build environment using trusted tools. Not everything has
             | to be a distro package, but the provenance of each binary
             | needs to be verifiable. That can include building your own
             | custom tools from a particular commit hash that you trust.
        
         | [deleted]
        
       | h2odragon wrote:
       | CNA?
       | 
       | > On March 21, 2021, CNA determined that it sustained a
       | sophisticated cybersecurity attack. The attack caused a network
       | disruption and impacted certain CNA systems, including corporate
       | email. Upon learning of the incident, we immediately engaged a
       | team of third-party forensic experts to investigate and determine
       | the full scope of this incident, which is ongoing.
       | 
       | + [CNA suffers sophisticated cybersecurity
       | attack](https://www.cna.com/)
        
         | FDSGSG wrote:
         | No, CNA was hit by ransomware.
        
       | motohagiography wrote:
       | Superb work. The "who" of attribution is more likely related to
       | the actual PII they were after than any signature you'll get in
       | the code. Seems like a lot of effort and risk of their malware
       | being discovered for PII instead of being an injection point into
        | those users' machines. I rarely hear security people talk about
       | _why_ a system was targeted, and once you have that, you can know
       | what to look for, inject canaries to test etc.
        
         | afrcnc wrote:
         | From Twitter chatter, this appears to be Chinese APT malware,
         | something related to PlugX
        
           | kjjjjjjjjjjjjjj wrote:
           | >Chinese APT
           | 
           | Wow, surprising!
        
           | mwcampbell wrote:
           | > Chinese APT malware,
           | 
           | Why is it necessary to point out the foreign origin? Doesn't
           | that just encourage our innate xenophobia?
        
             | fouric wrote:
             | It should be pretty easy for someone to differentiate
             | between the Chinese people and the Chinese government.
             | 
             | Meanwhile, can you prove that this "innate xenophobia" is
             | present in every human to an extent that it's actually
             | relevant, and that this particular instance of suggesting
             | that the malware is Chinese in origin meaningfully
             | exacerbates it?
             | 
             | Moreover, China is a geopolitical rival to the United
             | States, India, and other countries that constitute a
             | majority of HN readers. Information like this is
             | interesting from that viewpoint.
        
             | renewiltord wrote:
             | My interpretation, not knowing anything about the field, is
             | that this is a nation state actor or sponsored by such.
        
             | motohagiography wrote:
             | Threat modelling to develop useful risk mitigation requires
             | that system owners essentially do a
             | means/motive/opportunity test on the valuable data they
             | have. The motive piece includes nation states as actors,
             | and that matters in terms of how much recourse you are
             | going to have against an attacker.
             | 
             | However, I'd propose a new convention that any unattributed
             | attacks and example threat scenarios of nation states
             | should use Canada as the default threat actor, because
             | nobody would believe it or be offended.
        
             | mc32 wrote:
             | If it were Russian, American or Israeli would you have the
             | same reservations?
        
               | vxNsr wrote:
                | Lol no, s/he likely wouldn't, but s/he'll argue it's
                | different because Trump didn't make any negative
                | statements about them, so it's impossible to be
                | xenophobic against them.
               | 
               | To prove my point s/he had no problem with the top level
               | comment 6 hrs ago "mossad gonna mossad"
        
       | ericbarrett wrote:
       | One of the most fascinating breach analyses I've ever read.
       | 
       | Reading between the lines, I sense the client didn't 100% trust
       | Mr. Bogdanov in the beginning, and certainly knew there was
       | exfiltration of some kind. Perhaps they had done a quick check of
       | the same stats they guided the author toward. "Check for extra
       | bits" seems like a great place to start if you don't know exactly
       | what you're looking for.
       | 
       | Their front-end architecture seemed quite locked down and
       | security-conscious: just a kernel + Go binary running as init,
       | plain ol' NFS for config files, firewalls everywhere, bastion
       | hosts for internal networks, etc. So already the client must have
       | suspected the attack was of significant sophistication. Who was
       | better equipped to do this than their brilliant annual security
       | consultant?
       | 
       | Which is completely understandable to me, as this hack is already
        | of such unbelievable sophistication that it resembles a Neal
        | Stephenson plot. Since the author did not actually commit the
       | crime, and in fact _is_ a brilliant security researcher,
       | everything worked out.
        
         | eeZah7Ux wrote:
         | > just a kernel + Go binary running as init
         | 
         | This is hardly reducing the attack surface compared to a good
         | distro with the usual userspace.
         | 
         | It's been decades since attackers relied on a shell, or unix
          | tools in general, or on being able to write to disk and so
          | on: it's risky and ineffective.
         | 
         | Many attack tools run arbitrary code inside the same process
         | that has been breached and extract data from its memory.
         | 
         | They don't try to snoop around or write to disk and so on.
          | Rather, they move to another host.
         | 
         | The only good mitigation is to split your own application in
         | multiple processes based on the type of risk and sandbox each
         | of them accordingly.
        
           | ericbarrett wrote:
           | > This is hardly reducing the attack surface compared to a
           | good distro with the usual userspace.
           | 
           | Run `tcpdump -n 'tcp and port 80'` on your frontend host and
           | you'll still see PHP exploit attempts from 15 years ago. Not
           | every ghost who knocks is an APT. A singleton Go binary
           | running on a Linux kernel with no local storage is
           | objectively a smaller attack surface than a service running
           | in a container with /bin/sh, running on a vhost with a full
           | OS, running on a physical host with thousands of sleeping VMs
           | --the state of many, many websites and APIs today.
        
       | ivanstojic wrote:
       | I think the old "Mossad is gonna Mossad" thing is still true.
       | Good security practices are mandatory, and will keep you safe 99%
       | of the time.
       | 
        | But when you have what appear to be state-level actors using
        | 0-day exploits... you will not stop them.
        
         | satyanash wrote:
         | Thanks for making me look up "Mossad is gonna Mossad" ->
         | Schneier -> Mickens' essay titled "This World of Ours".
         | 
         | https://www.usenix.org/system/files/1401_08-12_mickens.pdf
        
           | jeffrallen wrote:
           | Thank you for this. Helps put my career choices into
           | perspective. (I just quit security work to be a stay at home
           | dad.)
        
           | GSGBen wrote:
           | Thanks, this is such good writing. Reminds me a little of
           | Douglas Adams.
        
             | sophacles wrote:
             | Both are good authors. If you like the humor aspect that's
             | mostly Mickens - one of my favorites from him:
             | https://www.usenix.org/system/files/1311_05-08_mickens.pdf
        
           | Arrath wrote:
           | That was very entertaining, thank you.
        
         | justupvoting wrote:
         | No 0-day here, more of a supply chain attack, but your point
          | stands. This actor was _determined_.
        
         | kjjjjjjjjjjjjjj wrote:
         | More like Chinese state sponsored hackers
        
       | cmeacham98 wrote:
       | Conspiracy theory: the fact the POC insisted on the writer
       | checking out the traffic suggests they knew about (or were
       | suspicious of) the fact that PII was being leaked.
        
         | clankyclanker wrote:
         | Probably, but is that a conspiracy theory so much as an
         | insurance policy? Being able to competently complete that sort
         | of nightmare investigation is probably why the investigator was
         | re-hired annually.
         | 
         | A packet capture of the config files would show something was
         | up to anyone suspicious, but knowing what to do about it is a
         | completely different story.
        
           | cmeacham98 wrote:
           | The 'conspiracy' part of my conspiracy theory is not that
           | they hired a security consultant, but that they explicitly
           | guided him to the exact hardware[1] with the correct metric
           | to detect it[2] asking him to test for a surprisingly
           | accurate hypothetical[3], even going so far as to temporarily
           | deny the suggestion of the person they're paying to do this
           | work[4]. This is weirdly specific assuming they had no
           | knowledge of the compromise.
           | 
           | Of course, I have no non-circumstantial evidence and this
           | could all be a coincidence, which is why my comment is
           | prefixed with "conspiracy theory".
           | 
           | 1: "However, he asked me to first look at their cluster of
           | reverse gateways / load balancers"
           | 
            | 2: He would likely have missed the issue with active
            | analysis, given the self-destruct feature
           | 
           | 3: "Specifically he wanted to know if I could develop a
           | methodology for testing if an attacker has gained access to
           | the gateways and is trying to access PII"
           | 
           | 4: "I couldn't SSH into the host (no SSH), so I figured we
           | will have to add some kind of instrumentation to the GO app.
           | Klaus still insisted I start by looking at the traffic before
           | (red) and after the GW (green)"
        
             | bombcar wrote:
             | It sounded to me like they had a suspicion and specifically
             | wanted the contractor to use his expertise in a limited way
             | that would catch if the suspicion was right.
             | 
              | Perhaps they had noticed the programs restarting and,
              | when trying to debug, triggered it.
        
               | [deleted]
        
             | marcosdumay wrote:
             | #4 is a reasonable request. If the client wants to verify
             | the lower level ops instead of higher level application and
             | deployment, the instrumentation would be counterproductive.
              | That could happen if he was thinking something along the lines
             | of "there's a guy here that compiles his own kernel on a
             | personal laptop, I wonder what impact this has".
             | 
             | The other ones could be explained by him being afraid of
             | leaking PII, and most PII being on that system.
        
             | tln wrote:
             | Perhaps "the guy responsible for building the kernel"
             | noticed his laptop was compromised. Then they'd know of a
             | theoretical possibility of a compromise.
             | 
             | Not wanting to instrument the Go app could be an
             | operational concern.
        
             | cham99 wrote:
             | Sometimes commercial companies get a tip from intelligence
             | agencies:
             | 
             | "Your <reverse gateway> devices are compromised and leak
             | PII." Nothing more.
        
         | mzs wrote:
         | from Igor:
         | 
         | >I think he had some suspicions, but he is denying that
         | vehemently ;)
         | 
         | https://twitter.com/IgorBog61650384/status/13753134251323146...
        
         | sloshnmosh wrote:
         | I was thinking the same.
        
       | anticristi wrote:
       | Call me naive, but who is such a hot target to warrant so much
       | effort to exfiltrate PII? Defense? FinTech? Government?
        
         | TameAntelope wrote:
         | If they can get to you, they can get to your clients, who have
         | clients they're now better able to get to, etc...
         | 
         | HVAC company working in a building where a subcontractor of a
         | major financial firm has an office, for a random example...
        
         | perlgeek wrote:
         | Hotel or b2b travel agencies also have PII that can be _very_
         | useful to intelligence agencies.
        
         | h2odragon wrote:
         | Who is such a hot target _and_ can take such an independent
          | attitude, even allowing this to be published? If this had
          | been a bank, they'd have had to report to regulators and
          | likely we'd have heard none of these details for years, if
          | ever. Same for most anything else big enough to be a target
          | that I can think of offhand.
        
           | dathinab wrote:
            | Idk. While banks have to report on this, they are (as far as
            | I know) still free to publicize details.
            | 
            | We normally don't hear about these things not because they
            | can't speak about it but because they don't want to speak
            | about it (bad press).
            | 
            | My guess is that it's a company which takes security
            | relatively seriously, but isn't necessarily very big.
            | 
            | > hot target [..] else big enough to be a target
            | 
            | I don't think you need to be that big to be a valid target
            | for an attack of this kind, nor do I think this attack is
            | on a level where "only the most experienced/best hackers"
            | could have pulled it off.
            | 
            | I mean, we don't know how the dev laptop was infected, but
            | given that it took them 3 months to reinfect it, I would say
            | it most likely wasn't a state actor or similar.
        
         | [deleted]
        
         | bsamuels wrote:
         | Based on how outlandish the GW setup is, this is definitely a
         | bank.
         | 
         | It could conceivably belong to a defense organization, but if
         | it did, they wouldn't be able to write up a blog about their
         | findings.
        
         | draugadrotten wrote:
         | I'd add medical to that list. Vaccine test results are hot
         | stuff.
        
           | daveslash wrote:
           | I think you're right that it's medical. The author calls out
            | that PII was the target. Sure, there's PII in
           | Defense/Fintech/Government, but it's probably not the target
           | in those sectors and PII doesn't have the same spotlight on
           | it as in the Medical world (e.g. HIPPA & GDPR).
        
             | dane-pgp wrote:
             | Are you saying that, for example, the addresses of military
             | generals and spies are less of a target for hackers than
             | the addresses of medical patients? While there are laws to
             | protect medical information, I think all governments care
             | more about protecting national security information.
        
               | jldugger wrote:
               | > the addresses of military generals and spies are less
               | of a target for hackers than the addresses of medical
               | patients?
               | 
               | Why not both? Think how valuable the medical information
                | of military staff would be as a source of coercive power.
        
               | daveslash wrote:
               | Ah, good point! No, I was not saying that at all, and
               | thank you for pointing that out.
               | 
               | When I was thinking of "defense", I was thinking of the
               | defense contractors who are designing/building things
               | like the next-gen weapons, radar, vehicles, and the like.
               | In that context, when it comes to what they can
               | exfiltrate, I think attackers probably prioritize the
               | details & designs over PII. Just a guess though.
        
           | clankyclanker wrote:
           | Not just vaccines, but basically all your data, including
           | billing and disease history. Perfect for both scamming and
           | extortion.
           | 
           | Keep in mind that you actually _want_ your medical provider
           | to have that data, so they can treat you with respect to your
           | medical history, without killing you in the process.
        
             | anticristi wrote:
             | True. However, reading between the lines, the exfiltration
             | "project" was targeted (i.e. one-off), skilled and long. I
             | would put the cost anywhere between 1 megabuck and 10
             | megabucks. Given risks and dubious monetization, I would
             | assume the "sponsor" demands at least a 10x ROI.
             | 
             | Is medical data really that valuable?
        
               | webnrrd2k wrote:
               | How about psychiatric data from the area around
               | Washington DC? Hospitals/practices that are frequented by
               | New York CEO-types? I can picture that being quite
               | valuable to the right parties.
        
       | sloshnmosh wrote:
       | Wow! What an amazing write-up and find!
       | 
       | It's also amazing that they noticed the subtle difference in the
       | NFS packet capture.
       | 
       | I can't wait for the rest to be published.
       | 
       | Bookmarked
        
       ___________________________________________________________________
       (page generated 2021-03-26 23:00 UTC)