[HN Gopher] New malware found on 30k Macs has security pros stumped
       ___________________________________________________________________
        
       New malware found on 30k Macs has security pros stumped
        
       Author : furcyd
       Score  : 289 points
       Date   : 2021-02-20 15:25 UTC (7 hours ago)
        
 (HTM) web link (arstechnica.com)
 (TXT) w3m dump (arstechnica.com)
        
       | andor wrote:
        | That's why I'm always hesitant to execute .pkg installers.
        | Luckily, the .app in most .pkg files can simply be extracted
        | on the command line using xar.
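
[Editor's note: a .pkg is a xar archive, so the extraction described above starts with a plain `xar -xf`. A minimal sketch of the command construction; the helper name is mine, and actually running the command requires xar, which ships with macOS.]

```python
# Sketch: build the xar invocation that unpacks a .pkg (a xar archive)
# into a directory. Command construction only; pass the result to
# subprocess.run() on a machine that has xar installed.
def xar_extract_cmd(pkg_path: str, dest_dir: str) -> list[str]:
    """Return the argv for extracting pkg_path into dest_dir with xar."""
    return ["xar", "-xf", pkg_path, "-C", dest_dir]
```

[After unpacking, the .app usually still sits inside a component's `Payload` file, commonly a gzipped cpio archive, so a further gunzip/cpio step is typically needed.]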
        
         | 1over137 wrote:
         | >Luckily, the .app in most .pkg files can simply be extracted
         | on the command line using xar
         | 
         | Or for a GUI option: Pacifist from https://www.charlessoft.com/
        
         | choeger wrote:
         | Maybe this is a stupid question, but what makes you trust the
         | app when you mistrust the installer? Or mistrust the installer
         | when you trust the app? Shouldn't both come from the same more
         | or less trustworthy source?
        
           | bilkow wrote:
           | The installer needs root (admin) permission, which may be
           | abused.
        
             | pjmlp wrote:
              | Having access to $HOME is already enough to cause
              | irrecoverable damage, especially if there is an
              | Internet connection.
        
               | lapcatsoftware wrote:
               | As far as "malware" is concerned, there's no reason to
               | trust an app over a pkg. Either one is going to get you.
               | But there are legit companies that definitely
               | "overpackage" their software. For example, Microsoft
               | distributes Edge for Mac with a pkg installer, even
               | though Edge is based on Chromium, and Google just
               | distributes Chrome as a drag and drop app in a dmg. But
               | you can extract the Edge app out of the pkg and just
               | install that, skipping whatever junk Microsoft decides to
               | do in their installer scripts.
               | 
               | I trust Microsoft not to be literal malware, but I don't
               | trust Microsoft to avoid doing stupid unnecessary crap to
               | your Mac.
        
               | forgotmypw17 wrote:
               | I would consider mandatory "telemetry" to be malware-
               | ish...
        
               | 2cb wrote:
               | To add to blub's response, macOS operates on a
               | permissions system similar to iOS. A new application has
               | to ask permission the first time it tries to access
               | anything in your home folder.
               | 
               | Moreover, it is also quite granular. You can allow access
               | to "Downloads" but still have to grant permission for it
               | to access "Documents" later.
        
               | pjmlp wrote:
               | Only since Catalina.
        
               | blub wrote:
               | Modern macOS doesn't give access to specific home
               | subfolders to apps.
        
               | pjmlp wrote:
               | Which not everyone is using, this is only enabled since
               | Catalina.
        
           | KarlKemp wrote:
            | I believe the .pkg installer usually asks for a password
            | and runs privileged, while moving the app only uses root
            | privilege to move the bundle and never executes any
            | external code with those privileges?
            | 
            | I'm not sure how relevant the difference is, considering
            | most valuable data is owned by the user, but also because
            | macOS has evolved its security model well beyond the Unix
            | standard.
        
         | aequitas wrote:
          | A .app can install a persistent service or the like just
          | as easily as a .pkg. Unless the .pkg is from an untrusted
          | source but the .app is signed. But then again, why install
          | from an untrusted source without a signature at all?
        
           | bilkow wrote:
           | The installer asks for root permission and (I believe) may
           | abuse it, being able to modify system files for example.
           | 
            | I also chown + chmod LaunchAgents and LaunchDaemons in
            | both libraries so that only root may write to them.
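
[Editor's note: the hardening described above can be spot-checked in a few lines. A minimal sketch; the helper names are mine, and the check assumes POSIX permission semantics with root as uid 0.]

```python
import os
import stat

def mode_blocks_nonroot_write(mode: int) -> bool:
    """True if the permission bits deny write access to group and other."""
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))

def only_root_can_write(path: str) -> bool:
    """True if `path` is owned by root (uid 0) and nobody else can write it."""
    st = os.stat(path)
    return st.st_uid == 0 and mode_blocks_nonroot_write(st.st_mode)

# e.g. only_root_can_write(os.path.expanduser("~/Library/LaunchAgents"))
```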
        
         | Toutouxc wrote:
          | I like .app a bit more than .pkg, but I have no idea
          | whether a .pkg installer without admin rights can actually
          | do anything more than run a bunch of shell scripts before
          | and after the installation.
         | 
         | Can you link me to an article or a piece of documentation that
         | would explain it to a non-mac-developer?
        
         | 2cb wrote:
         | There's also a very easy attack vector with pkg installs where
         | an installer can "run software to ensure it's compatible with
         | this machine" that's been there since forever. Not sure if it
         | was removed in Big Sur or not but I hope so.
         | 
         | Zoom used that hole to gain admin permissions and install
         | itself before the user even completed the installation process
         | if the user was admin[1] (and we know most people use admin
         | accounts as their main ones). I'm sure plenty of other malware
         | has done this as well.
         | 
         | If this was simply delivered as a trojan with one of those fake
         | "Flash Player needs updating" type popups, it could have very
         | well abused that.
         | 
         | If it's installing without user interaction the attack vector
         | is a far more advanced 0day exploit chain.
         | 
         | I'll be very interested to find out.
         | 
          | I am also curious about how they can get away with using
          | AWS and Akamai as C&C. Surely now that this malware has
          | been found, those providers will just shut down the
          | accounts being used? They'll also have some kind of trail
          | to whoever's behind it; it's not like AWS takes payment in
          | crypto.
         | 
         | [1] https://twitter.com/c1truz_/status/1244737672930824193
        
           | NateEag wrote:
           | Speculation, but I'd not be at all surprised if the C&C
           | servers are compromised boxes that have other uses.
        
         | lapcatsoftware wrote:
         | A really great app for this is Suspicious Package:
         | 
         | https://www.mothersruin.com/software/SuspiciousPackage/
        
       | soheil wrote:
       | I see people mentioning ~/Library/LaunchAgents to look for
       | suspicious apps. I should mention there are at least 4 other
       | places where a launch agent could start on OSX:
       | 
        | ~/Library/LaunchDaemons
        | ~/Library/LaunchAgents
        | /Library/LaunchDaemons
        | /Library/LaunchAgents
       | 
       | Not to mention Login Items under System Settings.
       | 
        | Finally, I hope it's obvious that if you're infected, for
        | all you know all your legitimate-looking launch agents could
        | be compromised and secretly run the malware in a background
        | process upon execution.
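
[Editor's note: the locations listed above can be swept with a few lines. A sketch; the helper is mine, and the /System/Library paths, which hold Apple's own jobs and are read-only on recent macOS, are added for completeness.]

```python
import glob
import os

# Directories launchd consults for job definitions, per the thread above.
LAUNCHD_DIRS = [
    "~/Library/LaunchAgents",
    "/Library/LaunchAgents",
    "/Library/LaunchDaemons",
    "/System/Library/LaunchAgents",
    "/System/Library/LaunchDaemons",
]

def launchd_plists(dirs=tuple(LAUNCHD_DIRS)):
    """Return every .plist file found in the given launchd directories."""
    found = []
    for d in dirs:
        pattern = os.path.join(os.path.expanduser(d), "*.plist")
        found.extend(sorted(glob.glob(pattern)))
    return found
```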
        
       | Geee wrote:
       | It's probably North Korea trying to extract some Bitcoin. All
       | their computer science resources are now focused on this 24/7. So
       | be careful.
        
         | ok123456 wrote:
         | It's probably the CIA/Israel trying to do another Stuxnet. All
         | their computer science resources are now focused on this 24/7.
         | So be careful.
        
           | [deleted]
        
       | theamk wrote:
        | Tangentially related to this post: I am surprised at the
        | infosec community's love of file checksums, like the MD5s
        | here. There are endpoint tools which collect MD5s, network
        | scanners which collect MD5s, and I have heard of actual
        | infosec directors who, upon seeing a blog post like this
        | one, would carefully type the hashes into their endpoint
        | management interface to make sure their org is not infected.
       | 
       | How on earth can this be effective? It is trivial to create
       | customized binaries in the installer, so no two users have the
       | same file checksum. With a bit of work, one can rig their
       | download server so each user has a completely different installer
       | checksum as well.
       | 
        | I remember reading about polymorphic viruses back in the
        | 1990s - not only did those have different checksums on each
        | computer, they also had no long-ish substrings in common
        | between two versions, making signature analysis ineffective.
        | Did malware authors just forget about all that and go back
        | to the trivial "everyone gets the same image" strategy?
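
[Editor's note: the triviality being argued here is easy to demonstrate: flip one byte and the MD5 is completely different. The byte strings below are made-up stand-ins for a binary image.]

```python
import hashlib

# Two "installers" identical except for a single trailing byte: their MD5s
# differ entirely, which is the per-user customization point made above.
base = b"\x7fELF" + b"\x00" * 64  # stand-in for a binary image
h1 = hashlib.md5(base + b"\x01").hexdigest()
h2 = hashlib.md5(base + b"\x02").hexdigest()
print(h1 != h2)  # True: every per-user variant gets a fresh hash
```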
        
         | lawl wrote:
         | > How on earth can this be effective?
         | 
         | It's not.
         | 
         | > Did malware authors just forgot about all those and went back
         | to trivial "everyone gets the same image" strategy?
         | 
          | No. But if you make your thing as polymorphic as possible
          | and deliver it in an automated way, I take 1000 samples
          | and look at what doesn't change to make a signature.
         | 
         | Also these things exist as webservices these days and criminals
         | pay per use to generate fresh undetected executables.
         | 
          | It just makes more sense to only release a new binary once
          | it has been detected, even if it's just a button click and
          | could be easily automated.
        
           | theamk wrote:
           | Why not both? Trivial change (like embedded timestamp in a
           | file) for every user, and advanced polymorphism for the times
           | when the malware is detected.
           | 
           | I bet most researchers have tools to bin files together
           | automatically based on the same MD5. Why make their life
           | easier when this is so easy to defeat?
        
             | tgsovlerkhgsel wrote:
             | Because many AV systems assume that any unknown file
             | (unknown to their cloud service) is probably doing this,
             | and thus probably malware.
        
           | MertsA wrote:
           | >I take 1000 samples and look what doesn't change to make a
           | signature.
           | 
            | There are fully metamorphic viruses where even the code-
            | mutating part is changed. Algorithmically it's
            | equivalent, but there's no "static" portion of the virus.
        
         | blakejustblake wrote:
         | The infosec community at large is well aware of how unreliable
         | just using md5 checksums to identify malware is. If anything it
         | is the absolute first line of defense for identifying malware,
         | in that it is easy to implement quickly and has a decent enough
         | chance of filtering out low hanging fruit. The biggest use for
         | the checksums between malware researchers is for identifying if
         | they have the same strain of malware as someone else.
         | Identification is mostly not based on checksums, but rather
         | things like YARA rules where different identifying factors of
         | malware are outlined to be compared against binaries. This
         | isn't foolproof either, but there is a rather large ecosystem
         | of malware researchers out there constantly taking samples and
         | releasing rules. I follow a lot of these folks on Twitter and
         | the majority of what they post are their findings on the
         | bajillionth strain of whatever malware is in vogue at the
         | moment. This sort of stuff is going to catch the majority of
         | what will be coming at most people and anything that slips by
         | the first lines of detection usually gets picked up somewhere
         | along the way and passed on to researchers who do an
         | exceptional job of reversing and identifying new malware or
         | strains of old ones. But of course the reliability of that
         | whole ecosystem depends on sensible organization security
         | policy to start with.
         | 
         | In short, md5 sums and signatures are there to protect against
         | the low hanging fruit, spray and pray type malware that's
         | pretty common. If someone wants to target you with uniquely
         | signatured malware they can. Identifying it isn't going to be
         | what stops it, but proper opsec can.
        
           | theamk wrote:
           | And that's what I don't understand! You say it "has a decent
           | enough chance of filtering", and I believe you -- but this
           | just seems so strange.
           | 
            | It seems to me like it is trivial to create a webserver
            | which says "serve the same binary, but put a random
            | ASCII string in bytes 40-48". Or make a malware
            | installer which says, "write out the executable file to
            | disk, but put a random value in bytes 80-88". Sure, it
            | won't help against a good YARA rule, but it seems really
            | easy to do, and it will frustrate researchers, and even
            | defeat some endpoint protection software, like [0] and
            | [1].
           | 
           | [0] https://help.symantec.com/cs/ATP_3.2/ATP/v106632175_v1273
           | 003...
           | 
           | [1] https://docs.mcafee.com/bundle/network-security-
           | platform-9.2...
        
             | lamontcg wrote:
              | It's like scam/phishing e-mails with typos in them. Most
             | widescale hacking is lowest effort looking for lowest
             | hanging fruit. If you hack enough servers for your purposes
             | without worrying about checksum randomization then worrying
             | about it is just wasted effort. And you want targets with
             | excessively shitty security postures or else you might
             | actually get tracked down and busted.
        
             | Leherenn wrote:
             | I think, like with many things, basic steps are not taken,
             | through laziness, carelessness or ignorance.
        
             | jjeaff wrote:
             | Lucky for us, most criminals are lazy.
        
               | thaumasiotes wrote:
               | I don't think this comes from them being lazy. I think
               | this comes from them not being aware of (1) the defense;
               | and (2) the mitigation. It's an example of security
               | through obscurity.
        
               | monocasa wrote:
               | Or even if they know about the defense and the
               | mitigation, it is additional work. In my work in the
               | formal economy I rarely get to ship the technically best
               | and most complete solution but instead a compromise 'MVP'
               | that'll receive more work only if the problem proves to
               | demand it. I expect the same holds true in the informal
               | economy.
        
           | johnmaguire2013 wrote:
           | > I follow a lot of these folks on Twitter and the majority
           | of what they post are their findings on the bajillionth
           | strain of whatever malware is in vogue at the moment.
           | 
           | Anyone in particular you recommend following?
        
             | wyxuan wrote:
             | krebsonsecurity, notdan, donk_enby are all good cybsec
             | follows. you can probably find others from people that they
             | follow/rt
        
         | tgsovlerkhgsel wrote:
         | The "create customized binaries" issue can be bypassed by
         | treating unknown hashes as suspect and blocking them.
         | 
         | Many systems already do something similar, e.g. I think I've
         | seen Chrome put speed bumps and scary warnings on rare files.
         | 
         | This forces malware authors to choose between the "hey this is
         | the first time we see this file on the planet, it's probably
         | bad" warning and serving the same hash to many victims.
         | 
         | A more extreme approach is binary whitelisting - if the hash
         | isn't explicitly approved, it doesn't run at all.
         | 
         | Hashes are also useful to uniquely identify what you're talking
         | about. Even for a fully polymorphic sample, being able to tell
         | other researchers/defenders "hey, I found ab512f... on my
         | network, and it does X" is useful even if nobody else has the
         | same hash, because then they can get a copy of that sample and
         | analyze it themselves.
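
[Editor's note: the allowlist stance described above reduces to a set-membership test. A minimal sketch; the function name and sample payloads are mine, with SHA-256 used in place of MD5.]

```python
import hashlib

def verdict(payload: bytes, allowlist: set[str]) -> str:
    """Allow only explicitly approved hashes; everything else is suspect."""
    digest = hashlib.sha256(payload).hexdigest()
    return "allowed" if digest in allowlist else "suspect"

approved = {hashlib.sha256(b"known-good installer").hexdigest()}
print(verdict(b"known-good installer", approved))    # allowed
print(verdict(b"never-seen-before blob", approved))  # suspect
```

[Under this model, per-user polymorphism stops helping the attacker: a fresh hash is exactly what gets flagged.]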
        
         | Aerroon wrote:
         | What they really need is a whitelist of checksums instead.
         | Preferably not md5 though.
        
           | noinsight wrote:
           | That is what Windows Defender Application Control does. It's
           | probably the most cutting edge solution on the market
           | actually.
           | 
           | It's a pure whitelisting solution where every single
           | executable and kernel driver needs to have an approved
           | digital signature or matching hash value or they won't be
           | permitted to run.
           | 
            | It's virtualization-assisted and can't be disabled
            | without rebooting. If you use a digitally signed policy
            | and someone tries to remove it, the machine will refuse
            | to boot.
           | 
           | The coolest thing is, it even expands to the scripting
           | languages built-in to Windows so PowerShell execution is
           | restricted according to policy etc.
           | 
            | In practice, of course, it's a big pain in the ass to
            | manage - a lot of software is not digitally signed, etc.
           | 
           | Every single artifact of every program needs to be digitally
           | signed or have a matching hash in the policy or they won't be
           | permitted to run.
           | 
           | For example, suppose a software installer: the .msi itself is
           | digitally signed so can easily be permitted to run... But
           | then, during installation it unpacks an .exe into %temp% that
           | isn't digitally signed and attempts to run that - oops, won't
           | run. I've come across even Microsoft software that does this.
           | 
           | https://docs.microsoft.com/en-us/windows/security/threat-
           | pro...
        
             | sydd wrote:
             | How does it handle scripts? E.g. my virus is a python
             | script bundled with the known and clean python executable
        
         | epr wrote:
         | That is simply a misuse of checksums. Checksums are for
         | verifying that your binary matches the one you thought you were
         | getting. The more people using them for identifying viruses,
         | the more common polymorphic viruses will be, especially
         | considering how trivial they are to implement.
        
       | FDSGSG wrote:
       | > Also curious, the malware comes with a mechanism to completely
       | remove itself, a capability that's typically reserved for high-
       | stealth operations.
       | 
       | What? Even random hackforums bots have this feature.
        
         | dapids wrote:
          | Any competent malware author can add that with some level
          | of sophistication; it's a no-brainer. (Agreeing)
        
       | fortran77 wrote:
       | Apple makes claims that they're "secure" but is that really true?
       | Is their fundamental design really different from anyone else's?
       | 
       | No word in this article about the infection vector.
        
         | Sebb767 wrote:
         | As long as you can execute anything you want on your Mac, you
         | can execute malicious code. Without the infection vector (which
         | is probably not known yet), it is impossible to say whether it
         | is user error or a security vulnerability.
        
         | 13415 wrote:
          | Most of Apple's security is theater, used instead to bind
          | developers to their infrastructure and make it harder to
          | use software outside their tightly controlled app stores.
          | 
          | You used to be able to get the password of encrypted home
          | directories from the page file with a simple grep command,
          | for many years without any attempt by Apple to fix it.
          | That alone should tell you how secure Macs are.
        
           | KarlKemp wrote:
           | "You used to be able..." "... with no attempt to fix" is
           | somewhat contradictory.
           | 
           | And, of course, the whole comment is inane. The MacOS
           | security model these days very effectively prevents apps from
           | accessing data outside their assigned sandbox. Meanwhile,
           | distributing outside the App Store still requires only a
           | (strict) subset of the steps required for distribution in the
           | App Store.
        
             | 13415 wrote:
             | It took them about three major OS updates to fix this
             | problem.
        
           | zepto wrote:
           | > You used to be able to get the password of encrypted home
           | directories from the page file with a simple grep command
           | 
           |  _You already have to be root to do this._
           | 
           | This tells you nothing about Mac security.
        
             | 13415 wrote:
             | Right, no need to encrypt anything, just don't set any read
             | permissions. Problem solved!
        
               | zepto wrote:
               | Nobody is saying this other than you.
        
               | 13415 wrote:
               | You insinuated it, though, by claiming that it wasn't a
               | security problem as it required root. If you think
               | failure to encrypt (leaking the passphrase) is not a
               | problem because the exploit requires root, then you
               | understand absolutely nothing about security.
               | 
                | By the way, the reason I mentioned this particular
                | example is that it almost certainly proves malicious
                | intent. It is not credible that a team of security
                | engineers at Apple developed FileVault without
                | thinking at all about locking the memory holding the
                | passphrase, and then continued not to think about it
                | for years. They basically only fixed it (and at
                | first even in the wrong way) after it became
                | folklore on every second Mac fan site. (Lost your
                | FileVault password? Don't despair: just copy & paste
                | this in the terminal.)
               | 
               | There is 0 reason to trust Apple on security.
        
               | zepto wrote:
               | > You insinuated it, though _by claiming that it wasn 't
               | a security problem as it required root_
               | 
                | I did not make any such claim. You are seeing things
                | that aren't there.
               | 
               | Encryption at rest is very important.
               | 
               | However, it's worth putting different risks into
               | perspective.
               | 
                | What you can do if you have root is simply a
                | different category of risk from what you can do
                | without. Pointing that out insinuates nothing.
               | 
               | > By the way, the reason why I mentioned this particular
               | example is that it almost certainly proves malicious
               | intent.
               | 
               | Obviously not, because otherwise they wouldn't have fixed
               | it.
               | 
                | That you can find a security problem from the past
                | that has now been fixed is the precise _opposite_ of
                | evidence that a company doesn't care about security.
        
             | lofi_lory wrote:
             | On many systems you were able to get root access by simply
             | leaving the password empty.
             | 
              | While relevant, getting root is still different from
              | getting the encryption keys.
        
               | zepto wrote:
               | What systems, and when?
        
             | ric2b wrote:
             | Isn't the page file on disk? Can't you just force shutdown
             | and then boot an alternative OS that ignores the file
             | permissions and read it?
        
               | zepto wrote:
               | Who knows - the problem was fixed long ago.
        
             | netsec_burn wrote:
             | I'm a different security guy from the one above. I have a
             | root exploit in macOS that Apple has refused to discuss
             | with me for half a year. I told them it affected Big Sur
             | (even since the betas) and they were unfazed. I have
                | reached out to everyone I know at Apple to schedule
                | a demo or even a phone call, and my requests have
                | been met with silence from their product security
                | team. How is that for some
             | perspective on Apple's security?
        
               | fortran77 wrote:
               | One of the oddest things about Apple's reaction is they
               | come to Hacker News and try to bury people talking about
               | real problems that real users see.
        
               | edoceo wrote:
               | Publish your research
        
               | netsec_burn wrote:
               | Can't do it if there is no way to keep my lights on and
               | food on my table.
        
               | zepto wrote:
               | Apple routinely interacts with security researchers, and
               | pays bug bounties.
               | 
               | It sounds like you haven't told them _any technical
               | details about the exploit, at all_. If this is the case,
               | then they have no way to tell you apart from a timewaster
               | or scammer.
               | 
               | If all you have done is tell them you have an exploit and
               | tried to schedule a meeting, it's unlikely they'll take
               | you seriously.
        
               | netsec_burn wrote:
               | In fact, I have given them technical details as to what
               | the capabilities are and the affected macOS versions. I
               | refuse to submit all of my research without them telling
               | me roughly what the bounty ranges are for the
               | vulnerabilities in the same class/equivalent impact.
               | 
               | I've done many bounty programs in the past. Companies
               | will always choose to pay nothing or close to nothing
               | when it is favorable to them. Apple refuses to share what
               | the bounty ranges are given technical information about
               | it, and asked me to submit the entirety of my research
               | without having any idea of what the compensation may be.
               | So they made it impossible for me and perhaps many others
               | to help improve the security of their OS ethically.
               | 
               | For every 1 company that pays you in bug bounties, 10
               | don't. If you're a security researcher, you can't afford
               | the possibility of getting nothing for months of
               | research.
        
               | easton wrote:
               | https://developer.apple.com/security-bounty/payouts/
               | 
               | It's somewhere between $5,000 and these numbers (as
               | $5,000 is the minimum).
        
               | netsec_burn wrote:
               | $5,000 is the minimum for all _categories_.
               | 
               | I have been unsuccessful at confirming with Apple that a
               | local privilege escalation (LPE) to root is in any one of
               | those categories, even though it's a widely understood
               | type of vulnerability. So I've been trying to get their
               | response on a category or at least have some frame of
               | reference so I can have a reasonable expectation of what
               | the ranges are.
               | 
               | That is what Apple will not provide.
        
               | zepto wrote:
               | > For every 1 company that pays you in bug bounties, 10
               | don't.
               | 
               | Do you have reason to believe that Apple doesn't pay, or
               | are you basing this on other companies behavior?
        
             | smoldesu wrote:
             | You're thinking of the shadow file, which I don't believe
             | OP is referring to.
        
         | smoldesu wrote:
         | Social engineering is always the most effective infection
         | vector, and the more secure you think your device is, the more
         | vulnerable you are. This is particularly the case on the higher
         | end of the spectrum, where I've seen iPhone users click on
         | obvious phishing scams with the justification of "oh, I'm on
         | iOS, they can't get my data". I've seen a lot of the same
         | mentality exist in the desktop Mac space, where most users
         | perform two types of installs: relatively safe ones through the
         | App Store, or venturing out into the wild west of the internet
         | and bringing home a random binary that hopefully does what it
         | says.
         | 
         | Honestly, I'm tired of HN users downvoting anything critical of
         | Apple. It only further confirms how much of an echo chamber
         | this place can be.
        
           | hu3 wrote:
           | > Honestly, I'm tired of HN users downvoting anything
           | critical of Apple. It only further confirms how much of an
           | echo chamber this place can be.
           | 
            | No kidding. I saw that firsthand after naively asking a
            | harmless question about CUDA support on M1. I got
            | downvoted to oblivion and added the comment later:
           | https://news.ycombinator.com/item?id=26149344
        
           | 2cb wrote:
           | This attitude from users is a very real problem, but I'd
           | argue it's platform agnostic.
           | 
           | For instance there were headlines around the internet not too
           | long ago stating Android is more secure than iOS based on
           | claims made by Zerodium.
           | 
           | Any Android user who read that may well have the same
           | attitude you've described from iOS users. And it's
           | potentially far more dangerous on Android because it allows
           | you to sideload apps.
           | 
           | You can even extend it to Windows. Your average user will buy
           | a laptop preinstalled with McAfee [1] and think "no need to
           | worry about viruses now because I've got an antivirus."
           | 
            | Don't get me wrong, I agree it's perfectly reasonable to be
           | critical of Apple when it's warranted, but we don't yet know
           | if it is in this case. It's entirely possible (and fairly
           | likely) this malware was delivered as a trojan using pop ups
           | the user had to interact with. If that's the case you can't
           | blame Apple for user error, especially when trojans exist for
           | every single OS that allow the user to install software from
           | the internet.
           | 
           | > most users perform two types of installs: relatively safe
           | ones through the App Store, or venturing out into the wild
           | west of the internet and bringing home a random binary that
           | hopefully does what it says.
           | 
           | This seems to imply any software installed from the App Store
           | is safe while anything from outside the App Store is
           | dangerous.
           | 
           | Isn't the implication that App Store downloads must
           | automatically be safe falling into the same trap you're
           | criticising here?
           | 
           | Quotes from the article:
           | 
           | > Developer ID Saotia Seay (5834W6MYX3) - v1 bystander binary
           | signature revoked by Apple
           | 
           | > Developer ID Julie Willey (MSZ3ZH74RK) - v2 bystander
           | binary signature revoked by Apple
           | 
           | So both of these malware packages were signed by Apple.
           | 
           | Seems like relying on Apple's review processes to determine
           | how safe a particular binary is only provides the same false
           | sense of security you're describing.
           | 
           | [1] https://www.youtube.com/watch?v=bKgf5PaBzyg (sorry,
           | couldn't resist)
        
             | codezero wrote:
             | I'm curious if more will come out of this but it sounds
             | like the attackers probably already attacked Julie or her
             | employer (she appears to have worked for Tile, Oculus and
             | others) and just signed the app with her ID. I bet they
             | have a ton of these credentials in their pocket from
             | previous infections.
             | 
             | Since this didn't go through the App Store it probably
             | wasn't reviewed but the developer's certificate would be
             | checked when it's run - hence the revocation now.
        
       | bitwize wrote:
       | The fact that it can remove itself sounds like it _might_ be a
       | PoC that made it out of some lab and into the wild.
       | 
       | Kinda parallels some of the theories about COVID...
        
       | tiagod wrote:
       | It's probably phoning home, although the article isn't clear on
       | that, so perhaps the people controlling it are only using it in
       | attacks targeting specific people or organisations.
        
         | djrogers wrote:
         | From the second paragraph of TFA:
         | 
         | "Once an hour, infected Macs check a control server to see if
         | there are any new commands the malware should run or binaries
         | to execute."
        
           | tiagod wrote:
           | Oops, missed that. Then I don't see what stumps the pros so
           | much.
        
       | qzw wrote:
       | A pretty polished delivery mechanism, including native M1 binary,
       | with no payload. Sounds like R&D in preparation for a real
       | deployment, or possibly a demo for a package that will be sold to
       | third parties.
        
         | Sebb767 wrote:
         | It might also be a targeted attack. I can easily imagine a TLA
         | using a wide net to get the target infected, but only
         | delivering the payload to the intended target (which can be
         | detected by IP, for example).
        
           | 2cb wrote:
           | If this is the work of a nation state attacker, as
           | sophisticated attacks often are these days, this is a very
           | likely possibility.
           | 
           | We saw just recently the likely Russian backed attack on a
           | wide net of US companies and even government agencies.
           | 
           | I doubt Macs are used often in US gov agencies but they are
           | used in tech companies for example. That latest attack hit a
           | lot of seemingly random US companies but if you want to
           | destabilise and cause loss of trust it is a very effective
           | strategy.
           | 
           | I happen to know someone who works high up in one of the
           | affected companies where the malware had been sitting on the
           | server for a few months passively gathering information
           | before the attack. They struck in the middle of the night
            | local time and erased all the backups before they began
           | running ransomware across the entire corporate network.
           | 
           | Thankfully the attack was caught within 10 minutes and they
           | were able to recover fairly quickly once the dust settled,
           | but they've got especially good security. Such an attack
            | could have done far more damage to a less prepared company.
           | 
           | And if, like me, your first thought was "you're seriously
           | saying they have good security but no off-site backups?"
           | Yeah, I know. I'd bet they have regular off-site backups now
           | though...
        
             | leephillips wrote:
             | Macs on the desktop are quite common in the US government.
        
               | 2cb wrote:
               | TIL. Honestly wouldn't have guessed that.
               | 
               | Makes the theory this is a nation state attack far more
               | plausible then.
        
       | Gys wrote:
        | > For those who want to check if their Mac has been infected,
       | Red Canary provides indicators of compromise at the end of its
       | report.
       | 
       | At the end of https://redcanary.com/blog/clipping-silver-
       | sparrows-wings/
        
         | JadeNB wrote:
         | Specifically, here's the list of indicators common to v1 & v2,
         | quoted from the article:
         | 
         | > ~/Library/._insu (empty file used to signal the malware to
         | delete itself)
         | 
         | > /tmp/agent.sh (shell script executed for installation
         | callback)
         | 
          | > /tmp/version.json (file downloaded from S3 to determine
         | execution flow)
         | 
         | > /tmp/version.plist (version.json converted into a property
         | list)
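
A quick way to test for the common indicators quoted above is a short shell loop. This is a sketch, not an authoritative detector: the /tmp files are transient and may have been cleaned up on reboot, so their absence alone proves nothing.

```shell
#!/bin/sh
# Check for the Silver Sparrow indicator files common to v1 and v2,
# as listed in the Red Canary report.
found=0
for f in "$HOME/Library/._insu" /tmp/agent.sh /tmp/version.json /tmp/version.plist; do
  if [ -e "$f" ]; then
    echo "indicator present: $f"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then
  echo "none of the common indicators found"
fi
```

For anything conclusive, also run the LaunchAgent and binary checks in the full report.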
        
         | lapcatsoftware wrote:
         | PROTIP: Select your LaunchAgents and LaunchDaemons folders in
         | /Library and ~/Library, select Folder Actions Setup in the
         | Services menu, and enable folder actions. You can use "add -
         | new item alert.scpt" to be notified whenever a new item is
         | added to those folders.
         | 
         | For even more protection, I flat out locked the folders
         | ~/Library/LaunchAgents and ~/Library/LaunchDaemons in Finder,
         | though this could interfere with some software you use.
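
The Finder lock described above can also be toggled from the command line with chflags. A sketch, guarded because chflags is BSD/macOS-only:

```shell
#!/bin/sh
# Lock ~/Library/LaunchAgents so new persistence items can't be dropped
# in without unlocking first. `chflags uchg` sets the same flag as
# Finder's "Locked" checkbox.
dir="$HOME/Library/LaunchAgents"
if command -v chflags >/dev/null 2>&1 && [ -d "$dir" ]; then
  chflags uchg "$dir"    # lock
  # ...later, before installing software that needs a launch agent:
  chflags nouchg "$dir"  # unlock
  echo "lock toggled on $dir"
else
  echo "chflags or $dir not available here; skipping"
fi
```

As noted, some legitimate installers will fail while the folder is locked, so remember to unlock before installing.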
        
           | inspector-g wrote:
           | There's also the fantastic app BlockBlock to help with this.
           | Notifies you of changes to those folders and allows you to
           | accept/deny whatever an app is trying to change.
        
           | pingiun wrote:
           | Thanks this is a very useful tip, the folder actions in
           | general seem pretty useful
        
           | carlosrg wrote:
           | That's great advice, never occurred to me despite knowing
           | about folder actions. Thanks!
        
         | 2cb wrote:
         | Sounds like this was discovered by MalwareBytes so if you have
         | that installed a scan should let you know too.
         | 
         | Also, Little Snitch will show you the connections it makes
         | every hour. Before anyone says it - in macOS 11.2, Apple
         | removed the exclusion list that allowed their own software to
         | bypass firewalls.[1]
         | 
         | [1] https://blog.obdev.at/a-wall-without-a-hole/
        
         | TacticalCoder wrote:
         | The guys at redcanary.com are giving cryptographic hashes as...
         | MD5.
         | 
         | 2021. MD5.
         | 
         | I can't wait to hear the apologists.
         | 
         | EDIT: instead of downvoting it would be nice to explain why
         | it's fine to use and encourage the usage of MD5 in 2021.
        
           | jcrawfordor wrote:
           | I hate to be an apologist, but MD5 is the norm in IOCs for
           | two reasons:
           | 
           | 1) All of the tools expect and produce MD5 because it is the
           | convention. Computing MD5 hashes of every file on disk or
           | passing through a network is a relatively common forensic
           | operation right now, SHA256 is not.
           | 
           | 2) IOCs are not intended to be used in scenarios in which a
           | malicious collision presents a problem (no one would want
           | to... mask their malware to still look like malware?) so
           | there is little downside to carrying on the convention.
           | 
           | While I would recommend against MD5 in most modern
           | applications, if nothing else to avoid having discussions
           | like this all the time, before upsetting an entire ecosystem
           | of tools it is important to consider whether or not the known
           | weaknesses of MD5 actually pose a problem. In this case, a
           | matching hash is the _bad_ state, and so there is no real
           | impact of a preimage attack.
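
For what it's worth, checking a file against a published MD5 IOC is a one-liner either way. A sketch; the hash below is a placeholder (the MD5 of the empty string), not one of the real Silver Sparrow hashes from the report:

```shell
#!/bin/sh
# Compare a local file against a published MD5 IOC.
# Placeholder hash (MD5 of the empty string), NOT a real IOC value.
ioc_md5="d41d8cd98f00b204e9800998ecf8427e"
file="/tmp/version.json"
if [ -e "$file" ]; then
  # `md5 -q` on macOS, `md5sum` elsewhere
  actual=$( md5 -q "$file" 2>/dev/null || md5sum "$file" | cut -d' ' -f1 )
  if [ "$actual" = "$ioc_md5" ]; then
    echo "IOC match: $file"
  else
    echo "no IOC match for $file"
  fi
else
  echo "$file not present"
fi
```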
        
           | Sebb767 wrote:
           | I didn't downvote you, but this use case of MD5 is fine - you
           | want to verify whether the binary matches the one they have.
           | You could spend a few hundred bucks on AWS to have a binary
           | which matches that as well, but what would be the point -
           | have users delete it?
           | 
           | Now, if you used it to verify whether a binary was _secure_,
            | this would be a problem. But in this case, the (still unlikely)
           | possibility of a false positive is not really a threat.
        
             | gruez wrote:
             | >Now, if you used it to verify whether a binary was
             | _secure_, this would be problem
             | 
             | Even that's probably fine. Collision attacks require the
             | attacker to control both inputs. In the case of code
             | signing this would mean the publisher is in on it, in which
             | case you're already screwed.
        
             | 0x1DEADCA0 wrote:
             | Why spend a few hundred bucks on amazon when you could just
             | wait a few days for your laptop to do the same ^.^
        
           | [deleted]
        
           | KarlKemp wrote:
           | This is not a cryptographic use of hashing, but just
           | simplification/identification. There is nothing to gain from
           | being able to impersonate some malware because you could
           | always just include the malware itself to match any hash.
        
           | 2cb wrote:
           | While MD5 hashes are insecure for hashing passwords or other
           | sensitive data, they're still fine for verifying the
           | integrity of data if you are simply verifying a file has not
           | been corrupted.
           | 
           | If MD5 was being used to verify a piece of software you
           | actually want, it's not secure as it's not collision
           | resistant.
           | 
           | But since we can be quite sure no one has made a file that
           | shares an MD5 hash with this new strain of malware, MD5 is
           | sufficient as a checksum in this use case.
           | 
           | You're correct to point out that newer hashes are still
           | preferable though, simply to get out of the habit of using
           | MD5 if nothing else. I assume you got downvoted because MD5's
           | weaknesses aren't relevant in this specific instance. But
           | still they could just have easily used SHA256.
        
           | hn_throwaway_99 wrote:
           | Because why not, what's the threat here? The purpose of the
           | hash is so an investigator can easily check if a given
           | package file is this malware. Note that (a) it is of course
           | trivial for the malware author to change something in the
           | package to change the hash, so a _different_ hash is not a
           | guarantee the package is clean and (b) the collision problems
           | of MD5 aren 't really a problem here, as why would someone
           | else have an incentive to make their file _falsely_ look like
           | this malware.
           | 
           | You were downvoted (or, rather, I downvoted you) for your
           | knee jerk "MD5 BAD!" comment without thinking through what
           | the problems would be, and worse, you took an aggressive tone
           | from the beginning ("I can't wait to hear the apologists")
           | that just makes you sound like a jerk.
        
           | A12-B wrote:
           | I knew a website that used MD5 to generate unique colors for
            | usernames. There's a finite number of colors in the RGB
            | color space, and it doesn't really matter if you get a
           | collision because it's just username colors. So there's
           | definitely still a realistic place for md5 in this world.
        
           | nkrisc wrote:
           | Could you please explain why this specific use of MD5 is
           | inappropriate?
           | 
           | I don't really see how the weaknesses of MD5 are applicable
           | here but I'd like to learn if I'm wrong.
        
           | hamilyon2 wrote:
           | You are actually right. Given how one can forge an md5 of any
           | file, some malware authors could match a "signature" so to
           | speak of a very popular useful file, leading to mass false
           | detections, for example.
        
             | upofadown wrote:
             | MD5 is not broken in a way where that would be possible.
             | The best that could be done by such a prankster would be to
             | create two pieces of malware with the same MD5. I guess
             | that could be confusing or something...
        
       | mensetmanusman wrote:
       | News like this reminds me why it will take a lot to adopt crypto
       | currencies.
       | 
       | For all we know, this software could just be quietly collecting
       | wallet passwords waiting for an opportune moment to attack.
       | 
       | With the sophistication of red team hackers from Russia, China,
        | NK, and Iran, why would we want to trust computers with such
        | critical infrastructure as payments?
        
         | brundolf wrote:
         | I'm no crypto apologist, but dollars are hardly less digital
         | than bitcoins at this point
        
           | hippich wrote:
            | Most types of dollar transactions are reversible. Stealing
            | bitcoin is like someone getting mugged and having their cash
            | stolen (not their credit cards and such).
        
           | MR4D wrote:
           | You make a good point, but I can still use dollars in offline
           | mode - i.e. the paper in my pocket - even when I am nowhere
           | near a computer or communication device.
           | 
           | That feature still makes them desirable...especially in
           | places like frozen Texas this past week.
        
             | MR4D wrote:
              | For whoever downvoted this comment, you should have been
             | there in 29 degree weather when I was buying 6 gallons of
             | bottled water with cash.
             | 
             | There was no cell service and the place could only take
             | cash. It was a 30 minute wait in the cold, but I used the
             | water both for drinking, and so I could flush my toilets
             | (city water was not working).
             | 
             | So if you disagree, fine - in fact share your opinion -
             | I'll probably learn something. But by downvoting it showed
             | ignorance.
        
               | cyberpunk wrote:
                | What am I missing? That's only like -2C, which is
                | practically t-shirt weather, isn't it?
        
               | IAmGraydon wrote:
               | I'm guessing that's a Texan.
        
               | forgotmypw17 wrote:
               | I'm quite cold-tolerant, but I would hardly consider
               | below freezing (0C) to be T-shirt weather. I would need
               | 3+ layers to feel comfortable in that long-term.
               | 
               | It's only T-shirt weather if you're moving from one
               | heated space to another.
        
               | jackson1442 wrote:
               | Presumably OP is from Texas, meaning the wind chill was
               | likely around 15F, roads were icy as hell, and there was
               | no (or extremely limited) power or water available.
        
         | alcover wrote:
          | Considerations about fingerprinting in this thread made me
          | think that if I had crypto wallets, I would change the relevant
          | file names and extensions, and patch the binaries so their
          | hashes are unrecognizable to a stealer implant.
        
       | nom wrote:
       | M1, 153 countries, AWS+Akamai as control infra? Yeah that has to
       | be a tech demo.
        
         | 13415 wrote:
         | They should rent this out as an installer service to Apple
         | developers who are sick of Gatekeeper and complex app review
         | requirements.
        
           | 2cb wrote:
           | Aren't the app review requirements only necessary to submit
           | apps to the App Store? Pretty certain you can still
           | distribute software through your own channels without Apple
            | reviewing it, you just need to apply as a developer with Apple
           | so the app is signed, then Gatekeeper doesn't get in the way.
           | 
           | Also I notice that Homebrew packages run just fine without
           | needing to be signed by Apple, not sure how but that's a
           | possibility if you really don't want to jump through Apple's
           | hoops.
        
             | mchristen wrote:
             | Signing doesn't keep Gatekeeper away, you need to also get
             | your app notarized, which involves uploading the app to
             | Apple's servers.
        
               | 2cb wrote:
               | Ah fair enough, didn't know they were so strict with
               | software distributed outside the App Store.
               | 
               | I say the best thing to do is distribute your software
               | using Homebrew then. As well as it being super convenient
               | since it's effectively the same as apt or any other
               | package manager common to other Unix systems, it bypasses
               | Gatekeeper.
               | 
                | Got curious how, and it's amazingly simple: it literally
                | just provides an environment variable that deletes the
                | "quarantine" xattr metadata.[1]
               | 
               | Tell your users Homebrew is the supported installation
               | method and you can skip right over Gatekeeper.
               | 
               | [1] https://github.com/Homebrew/homebrew-
               | cask/issues/85164
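
Concretely, the quarantine flag Gatekeeper keys off is just an extended attribute, which xattr can show and strip. A sketch; Example.app is a hypothetical path, and the whole thing is guarded since xattr's quarantine usage is macOS-specific:

```shell
#!/bin/sh
# Inspect and clear the com.apple.quarantine attribute Gatekeeper checks.
# "/Applications/Example.app" is a hypothetical path.
app="/Applications/Example.app"
if ! command -v xattr >/dev/null 2>&1; then
  msg="xattr not available on this system; skipping"
elif xattr -p com.apple.quarantine "$app" >/dev/null 2>&1; then
  xattr -dr com.apple.quarantine "$app"  # drop the flag recursively
  msg="quarantine attribute cleared on $app"
else
  msg="no quarantine attribute on $app"
fi
echo "$msg"
```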
        
               | stephenr wrote:
               | Notarizing is not strict by any definition of the term,
               | unless you consider "scans your software for malicious
               | content, checks for code-signing issues" to be strict?
               | 
               | It's an automated system.
               | 
               | Also, if you tell me your app is only installable via
                | Homebrew, I'm not installing it. Comparing Homebrew to
                | apt is like comparing a playdough-and-crayon sandwich
                | with an actual sandwich.
               | 
               | Sure, they both look kind of similar at a distance, and
               | technically you can eat both of them, but one is really
               | not well thought out, and if you say you don't like how
               | it tastes, the child who made it will get upset with you.
        
               | 2cb wrote:
               | What is your specific beef with Homebrew? You insult it
               | but don't provide any reason it's so much more inferior
               | compared to apt.
               | 
               | There are some ways Homebrew is actually _more_ secure
               | than apt. For example in order to do anything with apt
               | you must give it superuser rights. The same is not true
               | of Homebrew, which installs binaries in userspace and
               | explicitly tells you to never use sudo.
               | 
               | A Homebrew installer is a simple Ruby script you can
               | easily audit for yourself.
               | 
                | The packages have SHA256 checksums to verify integrity.
               | 
               | You can point it at a specific repo you trust and tell it
               | to get a package from there.
               | 
               | All downloads are done through a TLS connection, which is
               | not the case for apt.
               | 
               | And of course the whole thing is open source.
               | 
               | I fail to see where the hate is coming from.
               | 
               | > Notarizing is not strict by any definition of the term,
               | unless you consider "scans your software for malicious
               | content, checks for code-signing issues" to be strict?
               | 
               | I mean, having to register as a developer, get a
               | certificate to sign your apps, and still have to send off
               | your software to Apple each time you update it before you
               | can distribute it on your own website is pretty "strict"
               | compared to every other OS.
               | 
               | It doesn't seem to do much to prevent malware in the wild
               | either.
        
               | easton wrote:
               | Last I checked homebrew doesn't ask for root every time
               | because it changes the permissions on /opt/homebrew (or
               | /usr/local/bin if you are on Intel) to allow you to
               | install software as non-root. This is still extremely
               | insecure, as you can now install/remove/upgrade software
               | on the system without root's permission, which is
               | annoying if more than one user uses the device. Not to
               | mention other applications that you run can now also
               | write to these directories and blow stuff up without your
               | permission, whereas if the permissions were set as
               | default you'd get a password prompt at least.
               | 
               | I still use brew (because it has more apps than
               | macports), but why in the world they made this decision
               | rather than using, say ~/Applications (the macOS
               | recommended practice for software that only one user
               | needs) or ~/homebrew is beyond me (granted, apt doesn't
               | do this either, but I'm 99% sure that you can do it with
               | yum and it is how scoop works on windows).
        
               | yjftsjthsd-h wrote:
               | I don't have a stake in this fight, but some of those
               | don't really seem like advantages over apt -
               | 
               | > The packages are SHA256 signed to ensure code
               | integrity.
               | 
               | And apt uses GPG signatures.
               | 
               | > You can point it at a specific repo you trust and tell
               | it to get a package from there.
               | 
               | Exactly like apt?
               | 
               | > All downloads are done through a TLS connection, which
               | is not the case for apt.
               | 
               | Since apt enforces GPG signatures by default, this could
               | be a privacy issue but shouldn't be a security issue.
               | 
               | Unless you meant only for the sudo/non-sudo to be your
               | point on being better than apt and the rest was just
               | defending homebrew?
        
               | forgotmypw17 wrote:
               | Adding TLS into the picture introduces many extra failure
               | modes. Examples: clock out of sync, wrong version of SSL,
               | certificate signing problem. All of these things would
               | cause your install to become non-upgradeable by a non-
               | expert.
        
               | yjftsjthsd-h wrote:
               | > Notarizing is not strict by any definition of the term,
               | unless you consider "scans your software for malicious
               | content, checks for code-signing issues" to be strict?
               | 
               | I'd consider "you can't ship software for people to run
               | on their own machines without first uploading it to Apple
               | to get their seal of approval" to be quite strict,
               | regardless of what Apple actually does / looks at when
               | you upload it to them. I don't care how low their bar is,
               | I don't care that it's automated, I frankly wouldn't care
               | if it was a complete automatic rubber-stamp with no
               | checking at all - Apple forcing every developer to go
               | through them is draconian.
        
               | tokamak-teapot wrote:
                | It does seem inconvenient, but it's also intended to help
                | keep the platform, and therefore users, secure. I'm not sure
               | the word 'draconian' fits here, especially considering
               | its original meaning and historical uses.
        
               | forgotmypw17 wrote:
               | It makes the device non-serviceable without a central
               | authority. You could not do anything with it offline.
               | 
               | That means it is no longer a general-purpose computer,
               | but an extension of Apple's cloud.
        
       | Traubenfuchs wrote:
       | Hold on:
       | 
       | > Amazon Web Services and the Akamai content delivery network
       | 
       | Why isn't AWS investigating?
        
         | Sephr wrote:
         | Perhaps they are bound by a National Security Letter.
        
         | rini17 wrote:
         | They do forward abuse notices to whoever rents the
         | infrastructure. Perhaps some kind of investigation happens if
         | these are not acted upon.
        
           | kortilla wrote:
           | Lol, so Apple would notify Amazon of some kind of advanced
           | malware, and Amazon's first step would be to notify the
           | malware authors?
        
             | rini17 wrote:
             | LOL, do you think whoever is managing the malware has
             | themselves paid AWS to host it?
             | 
             | This kind of affair usually gets escalated to CEO level.
             | Bezos will pick up the phone if Cook calls. But usual plebs
             | business goes via abuse notices as I described.
        
             | jimrandomh wrote:
             | Generally speaking, malware authors build their command and
             | control networks out of compromised computers owned by
             | third parties. After all, they're already in the business
             | of compromising computers, and using their own computers
             | would leave an unnecessary trail back to them.
        
       | williesleg wrote:
       | Payload is 'help I'm being held captive in a mac manufacturing
       | plant against my will'
        
       | awinter-py wrote:
       | yeah but it also has the biologically useful ability to
       | efficiently synthesize ATP so we may incorporate it into our
       | germline rather than uninstalling it
        
       | TacticalCoder wrote:
       | > ~/Library/._insu (empty file used to signal the malware to
       | delete itself)
       | 
       | The article apparently doesn't explain how to protect against the
       | malware?
       | 
       | Cannot hurt to manually create ~/Library/._insu right? (not that
       | it seems to offer great protection but I take it cannot hurt?)
       | 
       | Anyone got any idea as to how to harden OS X a bit against this
       | malware?
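
Creating the marker file is indeed cheap to try. A sketch; per the report, ._insu is the signal that makes an existing infection delete itself, and nobody has confirmed whether pre-creating it deters a fresh install, so this is speculative protection at best:

```shell
#!/bin/sh
# Pre-create the empty self-delete marker described in the report.
# Speculative: only matters if the malware honors the marker on a
# machine it hasn't infected yet, which is unconfirmed.
mkdir -p "$HOME/Library"
touch "$HOME/Library/._insu"
ls -l "$HOME/Library/._insu"
```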
        
         | 2cb wrote:
         | We don't actually know what attack vector it's using so it's
         | pretty much impossible to say at this point. It could be a
         | simple trojan or it could be an advanced 0day chain. We have no
         | clue.
         | 
          | There'd be no point manually creating the file; by the time
          | the malware checks for it, you are already infected.
         | 
         | My advice: keep your OS and browser and everything else
         | updated, use uBlock Origin in the browser, and use a network-
         | wide ad blocker (Pi Hole, AdGuard Home - personally I prefer
         | the latter) with a few malware blocklists and keep them
         | updated.
         | 
         | Malwarebytes were the ones who discovered how big this thing is
         | so you can install that on your Mac and run scans.
         | 
         | You may also want to invest in Little Snitch which won't
         | necessarily protect you against an infection but it will alert
         | you to the calls the malware keeps making to its C&C servers.
         | It's also entirely possible the self-destruct mechanism they
         | found is triggered by such software being installed on the
         | machine. Past Mac malware often removes itself if it detects
         | Little Snitch.
         | 
         | And, obviously, don't install random software from shady
         | sources, but I assume anyone on HN knows this already.
        
           | forgotmypw17 wrote:
           | One more: browse the web without JS, except on trusted sites,
           | and tell everyone else to sod off.
           | 
           | Bonus: Avoid a lot of low-quality content.
        
           | pfortuny wrote:
           | See PROTIP above by lapcatsoftware (several threads above)
        
       | soheil wrote:
       | So without having to install yet another app on my mac which
        | could be riddled with malware itself, disguised in the form of an
       | anti-virus program and typically notorious for thrashing the
       | machine, what's the best way to find out if my machine is
       | infected?
        
         | mceachen wrote:
         | Look at the files in the "Detection opportunities" section at
         | the bottom of the article: https://redcanary.com/blog/clipping-
         | silver-sparrows-wings/
        
       | [deleted]
        
       | afrcnc wrote:
        | Actual source, not this watered-down Ars rewrite:
       | https://redcanary.com/blog/clipping-silver-sparrows-wings/
        
       | shortandsweet wrote:
       | Does HN shadow-ban? I saw a comment count of 1 but no
       | comments in the thread.
        
         | minikites wrote:
         | Yes, use the "showdead" toggle on your profile page.
        
           | Spare_account wrote:
           | I have showdead enabled, there are no dead comments on this
           | thread and the current comment count matches the number of
           | visible comments. I suspect the GP comment's observation was
            | due to a timing issue.
        
             | codetrotter wrote:
             | Or someone may have vouched for the comment in the meantime
        
             | shortandsweet wrote:
              | Hmm, I dunno what it was, to be honest. I checked
              | multiple times and refreshed, or at least I thought I
              | did. Shrug. The app I'm using has no setting for
              | showdead, so I wouldn't be surprised if the refresh
              | didn't work; it doesn't from time to time.
             | 
             | I really need a new app. Any recommendations?
        
               | dylan604 wrote:
               | Why does it need an app?
        
                | shortandsweet wrote:
                | Mainly because I have and use five browsers and can
                | never remember which one I'm logged into. There's no
                | chance I'll remember my password; there are just way
                | too many. I also don't use a password manager on my
                | phone, for security reasons.
        
               | 2cb wrote:
               | I don't like password managers in general mostly because
               | the ones that do all the syncing up magic you want are
               | closed source.
               | 
                | Recently I found out about Bitwarden and have been
                | using it for a couple of weeks. No regrets; it's
                | great.
               | 
               | It uses E2EE to sync passwords between devices and the
               | clients are all open source. It's also undergone multiple
               | third party security audits.
               | 
               | Makes everything a billion times more convenient and I
               | feel safe trusting it.
        
               | acct776 wrote:
               | Modern phones, when configured correctly, are more secure
               | than their desktop counterparts.
        
               | jtbayly wrote:
               | Have you tried a web browser? Safari is nice.
        
         | Sebb767 wrote:
         | As far as I know, it only hides dead (-4 votes) or flagged
         | comments. You can enable them with the "showdead" toggle in
         | your profile, as the sister comment correctly pointed out (I'd
         | recommend doing so, to see through biases a bit).
        
         | ok123456 wrote:
         | yes
        
         | hombre_fatal wrote:
         | That's not a very useful assumption to make when you see a
         | mismatched counter. It's almost always stale cache, like after
         | comment deletion.
        
       | social_quotient wrote:
       | How do they get or approximate an infection count?
        
         | bognition wrote:
         | Looks like the researchers are running a sinkhole that the
         | malware phones home to. From there it's a matter of counting
          | unique IPs and dropping cookies.
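The counting side of such a sinkhole can be sketched in a few lines. A toy illustration using a made-up `"<ip> <hostname>"` log format, not Red Canary's (or Malwarebytes') actual pipeline:

```python
from collections import defaultdict


def count_unique_clients(log_lines):
    """Count distinct client IPs seen per requested hostname.

    Assumes each log line looks like "<ip> <hostname>" -- a made-up
    format for illustration, not what a real sinkhole emits.
    """
    clients = defaultdict(set)
    for line in log_lines:
        ip, host = line.split()
        clients[host].add(ip)
    return {host: len(ips) for host, ips in clients.items()}


# Repeated check-ins from the same machine collapse to one client.
log = [
    "203.0.113.5 malware-c2.example.com",
    "203.0.113.5 malware-c2.example.com",
    "198.51.100.7 malware-c2.example.com",
]
print(count_unique_clients(log))  # prints {'malware-c2.example.com': 2}
```

Unique-IP counts are only approximate: NAT hides many machines behind one address and DHCP churn inflates the count, which is presumably why cookies are dropped as a second signal.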
        
         | Sebb767 wrote:
         | At the end of the article:
         | 
         | > "To me, the most notable [thing] is that it was found on
         | almost 30K macOS endpoints... and these are only endpoints the
         | MalwareBytes can see, so the number is likely way higher,"
         | Patrick Wardle [...] wrote in an Internet message.
         | 
          | So it seems Malwarebytes detected it on 30k customers' Macs.
        
       ___________________________________________________________________
       (page generated 2021-02-20 23:01 UTC)