[HN Gopher] Operation Triangulation: What You Get When Attack iP...
       ___________________________________________________________________
        
       Operation Triangulation: What You Get When Attack iPhones of
       Researchers
        
       Author : ruik
       Score  : 175 points
       Date   : 2023-12-27 15:47 UTC (7 hours ago)
        
 (HTM) web link (securelist.com)
 (TXT) w3m dump (securelist.com)
        
       | anotherhue wrote:
        | More important than getting their newly found exploits, you get
        | to know which of yours might be compromised. That's
        | counterintelligence.
        
       | hcarrega wrote:
        | There's a talk at CCC today
        
       | sampa wrote:
       | NSA found out
       | 
        | PS: does anybody think that Apple cooperated on this CPU
        | "feature"?
       | 
        | PPS: remember how everybody laughed when Russia said iPhones
        | were bugdoored?
        
         | chatmasta wrote:
         | Reading between the lines of TFA, it seems the researchers may
         | also suspect that to be the case:
         | 
         | > Our guess is that this unknown hardware feature was most
         | likely intended to be used for debugging or testing purposes by
         | Apple engineers or the factory, or that it was included by
         | mistake. Because this feature is not used by the firmware, we
         | have no idea how attackers would know how to use it.
         | 
         | However, keep in mind that this level of "bugdooring" is
         | possible without Apple's explicit cooperation. In fact, the
         | attackers don't even need to force a bug into the code. It
         | would probably be sufficient to have someone on staff who is
         | familiar with the Apple hardware development process (and
         | therefore knows about the availability of these tools), or to
         | simply get a copy of the firmware's source code. Sophisticated
         | attackers likely have moles embedded within Apple. But they
         | don't even need that here; they could just hire an ex-Apple
         | employee and get all the intel they need.
        
           | sampa wrote:
           | well of course nobody would have NSA_friendly_override() in
           | the source
           | 
           | plausible deniability is essential in such cases, hence the
           | term bugdoor
        
             | halJordan wrote:
              | This is the same conspiracy mindset as flat earthers, and
              | you deserve your own Netflix mockumentary for it.
              | 
              | Because a bug is a bug, its very nature means you cannot
              | prove it isn't malicious; therefore you take it as positive
              | proof of malice and sit pretty because no one can prove a
              | negative.
        
               | LanzVonL wrote:
               | Are you posting from Eglin AFB? Which outfit are you
               | with?
        
               | Aerbil313 wrote:
                | We had backdoors, then PRISM was revealed. We have
                | bugdoors now. No reason to think three-letter glowies
                | would like to give up any amount of control. They have
                | the 'power' to straight up lie to Congress under oath;
                | see Clapper.
        
               | withinboredom wrote:
               | Your double negative made me laugh.
               | 
               | In all seriousness, I wish I could tell you that you're
               | wrong, but I can't.
        
           | xvector wrote:
           | I always get weird consultants reaching out to me on LinkedIn
           | asking for deets on my org's layout and - curiously - our
           | tech stack. They offer something like $500+ an hour but I
           | don't want to be complicit in some compromise. Private
           | intelligence is such a fascinating industry.
        
           | jetrink wrote:
           | Since they've gone to the trouble of protecting it with an
           | insecure hash, couldn't they also have designed this hardware
           | feature so that it could be completely disabled until the
           | device is rebooted? This vulnerability doesn't persist
           | through reboots, so it would be sufficient to have the
           | firmware lock the feature out during startup outside of
           | development or manufacturing contexts.
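
The boot-time lockout jetrink describes is essentially a sticky disable bit: once firmware sets it, nothing short of a reset can clear it. A minimal sketch of the idea in Python, with entirely hypothetical names (nothing here reflects Apple's actual hardware):

```python
# Hypothetical sticky-disable bit: boot firmware sets it outside of
# development/manufacturing contexts, and only a reset clears it.
# Invented names; nothing here reflects Apple's actual design.

class DebugFeature:
    def __init__(self):
        self.locked = False

    def lock_until_reset(self):
        self.locked = True          # one-way: no unlock path exists

    def use(self) -> str:
        if self.locked:
            raise PermissionError("feature locked out until reboot")
        return "debug access granted"

f = DebugFeature()
assert f.use() == "debug access granted"   # factory/dev context
f.lock_until_reset()                       # production boot does this

try:
    f.use()
    blocked = False
except PermissionError:
    blocked = True
assert blocked
```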
        
             | withinboredom wrote:
             | > This vulnerability doesn't persist through reboots
             | 
             | I suspect, once you stop receiving data from the device,
             | you just text it the invisible message every few minutes
             | until you start getting data again.
        
         | WhackyIdeas wrote:
         | NSA will have the same special relationship with Apple as they
         | do with AT&T.
        
         | halJordan wrote:
          | I just don't get this mentality. Here is proof positive (if
          | you believe attribution) that the NSA is using exquisite and
          | exotic techniques to force their way into iPhones, and you look
          | at it and come up with the exact opposite conclusion: that
          | Apple is letting them into the iPhone. It's not a backdoor if
          | you're smashing in the window.
        
         | photochemsyn wrote:
          | Based on past history, it would be more surprising if Apple
          | wasn't actively cooperating with the NSA, as was the case with
          | PRISM (wiki):
         | 
         | > "The documents identified several technology companies as
         | participants in the PRISM program, including Microsoft in 2007,
         | Yahoo! in 2008, Google in 2009, Facebook in 2009, Paltalk in
         | 2009, YouTube in 2010, AOL in 2011, Skype in 2011 and Apple in
         | 2012. The speaker's notes in the briefing document reviewed by
         | The Washington Post indicated that '98 percent of PRISM
         | production is based on Yahoo, Google, and Microsoft'"
         | 
         | With the rise of end-to-end encryption in the wake of the
         | Snowden revelations, this put large tech corporations in a
         | bind, given the conflict between consumer desire for secure
         | snoop-proof devices, and government desire for backdoor access.
          | Pressure might have been applied via government contracting
         | decisions, so no cooperation == no big government contract. The
         | general rise of end-to-end encryption also meant that things
         | like deep packet inspection along the trunk no longer worked,
         | putting a premium on breaking into devices to install
         | keyloggers etc.
         | 
          | All the fear of China doing this with Huawei (probably well-
          | justified fear) may have arisen in part as projection by
          | politicians and insiders who knew the US government was doing
          | it already with Apple, Android, Intel, ARM, etc. The US
          | government has certainly retained legalistic justification for
          | such behavior, even though the Act expired in 2020 [1]. Also,
          | corporations have been given retroactive immunity for similar
          | illegal activities before [2], so Apple has that precedent to
          | go by.
         | 
         | [1]
         | https://www.cjr.org/the_media_today/section_702_renewal_pres...
         | 
         | [2] https://www.aclu.org/news/national-security/retroactive-
         | tele...
        
       | soupdiver wrote:
       | https://streaming.media.ccc.de/37c3/relive/11859
        
         | contingencies wrote:
         | Begins @ 27:21
         | 
          | In addition to the contents of the presentation, in terms of
          | timeline...
         | 
          | 2018 (September): First CPU with the undocumented MMIO present
          | launched, the Apple A12 Bionic SoC.
         | 
         | 2021 (December): Early exploit chain infrastructure
         | backuprabbit.com created 2021-12-15T18:33:19Z, cloudsponcer.com
         | created 2021-12-17T16:33:50Z.
         | 
         | 2022 (April): Later exploit chain infrastructure
         | snoweeanalytics.com created 2022-04-20T15:09:17Z suggesting
         | exploit weaponized by this date.
         | 
          | 2023 (December): Approximate date of capture (working back from
          | the "half year" quoted analysis period plus mid-2023 Apple
          | reports).
         | 
         | The presenters also state that signs within the code reportedly
         | suggested the origin APT group has used the same attack
          | codebase for "10 years" (i.e. since ~2013) and also uses it
          | to attack macOS laptops (with antivirus circumvention). The
         | presenters note that the very "backdoor-like" signed debug
         | functionality may have been included in the chips without
         | Apple's knowledge, eg. by the GPU developer.
         | 
         | So... in less than 3.5 years since the first vulnerable chip
         | hit the market, a series of undocumented debug MMIOs in the
         | Apple CoreSight GPU requiring knowledge of a lengthy secret
         | were successfully weaponized and exploited by an established
         | APT group with a 10+ year history. Kaspersky are "not
         | speculating" but IMHO this is unlikely to be anything but a
         | major state actor.
         | 
         | Theory: I guess since Apple was handed ample evidence of ~40
         | self-doxxed APT-related AppleIDs, we can judge the identity
         | using any follow-up national security type announcements from
         | the US. If all is quiet it's probably the NSA.
        
           | mike_hearn wrote:
           | It's really a pity they explain all the mistakes that helped
           | the malware be detected.
        
             | halJordan wrote:
             | It's not, it really isnt. Honestly just apply this
             | mentality to one other scenario to test the waters. We
             | should stop publishing yara rules because it flips our hand
             | to the malware makers? It's nonsense to even say.
        
         | mb4nck wrote:
         | The (first?) version of the real recording is now up:
         | https://media.ccc.de/v/37c3-11859-operation_triangulation_wh...
        
       | mike_hearn wrote:
        | That's pretty astonishing. The MMIO abuse implies that the
        | attackers either have truly phenomenal research capabilities or
        | hacked Apple and obtained internal hardware documentation (more
        | likely).
       | 
        | I was willing to believe that maybe it was just a massive NSA-
        | scale research team up until the part with a custom hash function
        | sbox. Apple appears to have known that the feature in question
        | was dangerous and deliberately hid it, whatever it is, and then
        | went further and protected it with a sort of (fairly weak)
        | digital signing feature.
       | 
       | As the blog post points out, there's no obvious way you could
       | find the right magic knock to operate this feature short of doing
       | a full silicon teardown and reverse engineering (impractical at
       | these nodes). That leaves hacking the developers to steal their
       | internal documentation.
       | 
       | The way it uses a long chain of high effort zero days only to
       | launch an invisible Safari that then starts from scratch, loading
       | a web page that uses a completely different chain of exploits to
       | re-hack the device, also is indicative of a massive organization
       | with truly abysmal levels of internal siloing.
       | 
       | Given that the researchers in question are Russians at Kaspersky,
       | this pretty much has to be the work of the NSA or maybe GCHQ.
       | 
       |  _Edit: misc other interesting bits from the talk: the malware
       | can enable ad tracking, and also can detect cloud iPhone service
        | hosting that's often used by security researchers. The iOS/macOS
       | malware platform seems to have been in development for over a
       | decade and actually does ML on the device to do object
        | recognition and OCR on photos, to avoid uploading image
       | bytes: they only upload ML generated labels. They truly went to a
       | lot of effort, but all that was no match for a bunch of smart
       | Russian students.
       | 
       | I'm not sure I agree with the speaker that security through
       | obscurity doesn't work, however. This platform has been in the
       | wild for ten years and nobody knows how long they've been
       | exploiting this hidden hardware "feature". If the hardware
       | feature was openly documented it'd have been found much, much
       | sooner._
        
         | sampa wrote:
          | or Apple just implemented this "API" for them, because they
          | asked nicely
        
           | chatmasta wrote:
           | Or they have assets working at Apple... or they hired an ex-
           | Apple employee... etc.
           | 
           | That's the problem with this sort of security through
           | obscurity; it's only secure as long as the people who know
           | about it can keep it secret.
        
             | mike_hearn wrote:
             | I don't think hiring an ex-Apple dev would let you get the
             | needed sbox unless they stole technical documentation as
             | they left.
             | 
             | So it either has to be stolen technical docs, or a feature
             | that was put there specifically for their usage. The fact
             | that the ranges didn't appear in the DeviceTree is indeed a
              | bit suspicious; the fact that the description after being
             | added is just 'DENY' is also suspicious. Why is it OK to
             | describe every range _except_ that one?
             | 
             | But the really suspicious thing is the hash. What kind of
             | hardware interface does arbitrary DMA protected by a secret
             | but weak hash function? Is there any legitimate usage for
             | such a thing? I've never heard of such an interface before.
             | 
             | If it's a genuine backdoor and not a weird debugging
             | feature then it _should_ be rather difficult to add one
             | that looks like this without other people in Apple
              | realizing it's there. Chips are written in source code
             | using version control, just like software. You'd have to
             | have a way to modify the source without anyone noticing or
             | sounding the alarm, or modifying it before synthesis is
             | performed. That'd imply either a very deep penetration of
             | Apple's internal network sufficient to inject backdoors
             | into hardware, or they have one or more agents.
             | 
             | This really shows how dangerous it is to intel agencies
             | when they decide to attack security professionals.
             | Attacking Kaspersky has led directly to them burning
             | numerous zero days including several that might have taken
             | fairly extreme efforts to set up. It makes you wonder what
              | is on these guys' iPhones that's considered so valuable.
             | Presumably, they were after emails describing more zero
             | days in other programs.
        
               | contingencies wrote:
               | APTs probably routinely identify and target such
               | developers. With multi-million dollar payouts for single
               | bugs and high state level actor attention, employee
               | profiling is clearly a known attack vector and internal
               | security teams probably now brief on relevant opsec. FWIW
               | the only Apple kernel developer I knew has somewhat
               | recently totally removed themselves from LinkedIn.
        
               | malfist wrote:
               | > But the really suspicious thing is the hash. What kind
               | of hardware interface does arbitrary DMA protected by a
               | secret but weak hash function? Is there any legitimate
               | usage for such a thing? I've never heard of such an
               | interface before.
               | 
               | Never attribute to malice that which can be attributed to
               | incompetence. There are plenty of examples in the wild of
               | going halfway with strong security, but halfway still
               | leaves the barn door open.
        
               | joe_the_user wrote:
               | _Never attribute to malice that which can be attributed
               | to incompetence. There are plenty of examples in the wild
               | of going halfway with strong security, but halfway still
               | leaves the barn door open._
               | 
                | That rule should only be applied in the normal world. In
                | the world of security, where you know bad actors are out
                | there trying to do stuff, it doesn't apply. And there are
                | examples of spy types injecting plans to go halfway with
                | security for their purposes. Not that this proves the
                | origin of a given plan; incompetence is still one
                | possibility. It just returns to the original point: this
                | stuff is mysterious.
        
           | WhackyIdeas wrote:
            | I think the way it's done is that the code is presented to
            | them to use; Apple probably doesn't even code those parts
            | themselves.
        
         | aberoham wrote:
          | Also note the IoC script -- this script lets you scan iTunes
          | backups for indicators of compromise by Operation
          | Triangulation. https://github.com/KasperskyLab/triangle_check
        
         | black_puppydog wrote:
         | > If the hardware feature was openly documented it'd have been
         | found much, much sooner.
         | 
          | Well, the point of Kerckhoffs's principle is that it _should_
          | have been openly documented, and then anyone looking at the
          | docs even pre-publication would have said "we can't ship it
          | like that; that feature needs to go."
        
         | jsjohnst wrote:
          | A compromise on the GPU or ARM side seems an equally possible
          | route.
        
       | neilv wrote:
       | > _If we try to describe this feature and how the attackers took
       | advantage of it, it all comes down to this: they are able to
       | write data to a certain physical address while bypassing the
       | hardware-based memory protection by writing the data, destination
       | address, and data hash to unknown hardware registers of the chip
       | unused by the firmware._
       | 
       | Did the systems software developers know about these registers?
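
The write sequence quoted above can be sketched as a gate that performs a protection-bypassing write only when the supplied hash matches one it computes over the data. This is a toy model only; the register interface and the hash below are made-up stand-ins, not the real hardware:

```python
# Toy model of the described mechanism: a hidden block that performs a
# write bypassing normal memory protection, gated on a hash of the data.
# The hash and interface here are invented placeholders, not Apple's.

def toy_hash(data: bytes) -> int:
    """Stand-in for the secret hardware hash (20-bit result)."""
    h = 0
    for b in data:
        h = (h * 31 + b) & 0xFFFFF
    return h

class HiddenMmioBlock:
    """Pretend hardware: accepts (dest, data, hash) like the quoted text."""
    def __init__(self, memory: bytearray):
        self.memory = memory

    def write(self, dest: int, data: bytes, supplied_hash: int) -> bool:
        if toy_hash(data) != supplied_hash:
            return False                            # hash mismatch: dropped
        self.memory[dest:dest + len(data)] = data   # bypasses protection
        return True

mem = bytearray(64)
blk = HiddenMmioBlock(mem)
payload = b"\x41\x42"

assert not blk.write(0, payload, 0xBAD)            # wrong hash rejected
assert blk.write(0, payload, toy_hash(payload))    # correct hash accepted
assert mem[:2] == payload
```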
        
       | xvector wrote:
        | Does Lockdown Mode protect against this?
        
         | 542458 wrote:
          | I think Lockdown Mode drops most iMessage features, so I would
          | suspect the answer is yes. But as far as I can tell, Lockdown
          | Mode prevents use of MDM, so it might be a net negative for
          | security... instead, using the MDM policy that disables
          | iMessage might be preferable.
        
         | halJordan wrote:
          | It likely does. Lockdown Mode stops most iOS auto-processing
          | of message attachments, and this was delivered via a message
          | attachment.
        
       | londons_explore wrote:
       | What are the chances this MMIO register could have been
       | discovered by brute force probing every register address?
       | 
       | Mere differences in timing could have indicated the address was a
       | valid address, and then the hash could perhaps have been brute
       | forced too since it is effectively a 20 bit hash.
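
As a back-of-envelope for the brute-force idea: a 20-bit hash means at most 2^20 (~1M) guesses if the interface silently ignores failures. A toy simulation in Python, with a made-up checksum standing in for the real hash:

```python
# Toy simulation of the brute-force idea: try every 20-bit hash value
# until the gate accepts. The checksum is an invented stand-in; the real
# interface might also reboot or lock out on failure, which this ignores.

def toy_check(data: bytes) -> int:
    """Made-up 20-bit checksum playing the role of the secret hash."""
    h = 0
    for byte in data:
        h = ((h << 5) ^ byte ^ (h >> 15)) & 0xFFFFF
    return h

def gate(data: bytes, guess: int) -> bool:
    """Pretend hardware check: accept only the matching hash value."""
    return guess == toy_check(data)

payload = b"\x41\x42"
attempts = 0
for guess in range(1 << 20):      # exhaustive 20-bit search, ~1M worst case
    attempts += 1
    if gate(payload, guess):
        break

assert guess == toy_check(payload)
assert attempts == toy_check(payload) + 1   # guesses 0..target inclusive
```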
        
         | londons_explore wrote:
         | Looking at that sbox implementation, I can't believe it was
         | implemented as a lookup table in the hardware of the chip -
         | there must be some condensed Boolean expression that gives the
         | same result.
         | 
         | The fact the attackers didn't know that Boolean expression
         | suggests they reverse engineered it rather than had
         | documentation.
        
       | kevinwang wrote:
        | Wow, that's amazing. I wonder if attackers like this feel
        | unappreciated since they can't take credit for their work.
        
         | belter wrote:
          | Public key cryptography was developed in the 1970s at GCHQ,
          | but that work was classified.
        
       | LanzVonL wrote:
       | Isn't the most obvious answer that Apple, like other US tech
       | firms such as Google, simply creates these wild backdoors for the
       | NSA/GCHQ directly? Every time one's patched, three more pop up.
       | We already know Apple and Google cooperate with the spy agencies
       | very eagerly.
        
         | jsjohnst wrote:
         | > We already know Apple and Google cooperate with the spy
         | agencies very eagerly.
         | 
         | The evidence clearly indicates otherwise...
        
           | Aerbil313 wrote:
           | Ahem, Snowden, PRISM anyone?
        
             | jsjohnst wrote:
             | Ahem, you mean you have a single example, from a decade
             | ago, one where Apple was hardly a key player (hence why
             | Apple didn't sign onto PRISM until half a decade after
              | Yahoo, Microsoft, Google, et al.), as conclusive evidence
             | of "eagerness to partner with spy agencies", despite
             | numerous public cases where they've done the opposite...
             | got it!
        
           | freeflight wrote:
           | How so? Any competent intelligence service will not just
           | depend on the goodwill of a corporation to secure access to
           | assets and intelligence.
           | 
           | If they cooperate that's good and convenient, but that does
           | not mean the intelligence service will not set in place
           | contingencies for if the other side suddenly decides not to
           | play ball anymore.
        
         | freeflight wrote:
         | I consider that plausible with Google due to Google's funding
         | history [0], but Apple is afaik way less "influenced" and the
         | way this pwn was pulled off could also have been done by
         | compromising Apple's hardware supply chain and not Apple
         | itself.
         | 
         | Particularly considering how in the past Apple has been very
         | willing to be on the receiving end of negative headlines for
         | not giving US agencies decrypted access to iCloud accounts of
          | terrorist suspects. With Google, I don't remember it ever
          | having been the target of such controversy, meaning they
          | willingly comply with all incoming requests.
         | 
         | [0] https://qz.com/1145669/googles-true-origin-partly-lies-in-
         | ci...
        
       | guwop wrote:
       | Crazy!
        
       | londons_explore wrote:
        | Notice that the hash value for a data write of all zeros is
        | zero...
       | 
       | And for a single bit, the hash value is a single value from the
       | sbox table. That means this hash algorithm could reasonably have
       | been reverse engineered without internal documentation.
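
The observation generalizes: if the hash is linear, so that each set input bit XORs in one table entry, then hashing all-zero data gives zero, and hashing each single-bit input reads out one sbox entry directly. A hypothetical sketch (the "secret" table is invented, and the real algorithm may well not be this convenient):

```python
# Sketch of the reverse-engineering idea: for a linear hash, hash(0) == 0
# and hashing a single set bit reads out one sbox entry. The "secret"
# table below is invented, not Apple's.
import random

random.seed(1)
WIDTH = 32                                         # toy input width in bits
SECRET_SBOX = [random.getrandbits(20) for _ in range(WIDTH)]

def secret_hash(x: int) -> int:
    """The 'hardware' the attacker can query but not inspect."""
    h = 0
    for i in range(WIDTH):
        if (x >> i) & 1:
            h ^= SECRET_SBOX[i]
    return h

# Attacker side: recover the whole table from WIDTH + 1 probes.
assert secret_hash(0) == 0                         # all-zero data -> 0
recovered = [secret_hash(1 << i) for i in range(WIDTH)]
assert recovered == SECRET_SBOX

def predicted_hash(x: int) -> int:
    """Offline prediction using only the recovered table."""
    h = 0
    for i in range(WIDTH):
        if (x >> i) & 1:
            h ^= recovered[i]
    return h

assert predicted_hash(0xDEADBEEF) == secret_hash(0xDEADBEEF)
```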
        
         | londons_explore wrote:
          | This 'smells' like a typical way to prevent memory writes to
          | random addresses from accidentally triggering this hardware.
          | It doesn't look like it was intended as a security feature.
         | 
         | In fact, this is how I'd implement it if someone said to me it
         | was important that bugs couldn't lead to random writes. This
         | implementation also effectively prevents someone using this
         | feature whilst giving a buffer address they don't know the
         | contents of.
         | 
         | 10 bits of security is probably enough for that as long as you
         | reboot the system whenever the hash value is wrong. The
         | coresight debug functionality can totally reboot the system if
         | it wants to.
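
Putting rough numbers on that: with a forced reboot on every wrong hash, even a blind search against 10 bits is slow and extremely conspicuous. Purely illustrative arithmetic, with an assumed reboot time:

```python
# Illustrative arithmetic only: assumed reboot time, blind guessing.
bits = 10
expected_guesses = 2 ** bits / 2        # 512 guesses on average
reboot_seconds = 30                     # assumption, not a measured value
hours = expected_guesses * reboot_seconds / 3600

assert expected_guesses == 512
print(round(hours, 1))                  # hours of nonstop, very visible reboots
```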
        
           | tedunangst wrote:
            | Like a CRC? I'm reminded of the Broadcom compression
            | algorithm that required tedious reverse engineering, or a
            | look at the Wikipedia page with sample code.
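
The CRC comparison is apt in one specific way: CRCs are linear over XOR (up to a length-dependent constant), which makes them good at catching accidental corruption and useless against a deliberate attacker, who can forge checksums algebraically. A quick illustration with Python's standard crc32:

```python
# CRC-32 is affine over XOR: for equal-length messages a and b,
# crc(a ^ b) == crc(a) ^ crc(b) ^ crc(zeros). An attacker who can
# observe checksums can therefore compute the checksum of modified
# data without knowing any secret.
import binascii

crc = binascii.crc32
a = b"\x12\x34"
b_ = b"\xab\xcd"
x = bytes(p ^ q for p, q in zip(a, b_))   # bitwise XOR of the messages

assert crc(x) == crc(a) ^ crc(b_) ^ crc(b"\x00\x00")
```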
        
       | DantesKite wrote:
       | Steve Weis on Twitter described it best:
       | 
       | "This iMessage exploit is crazy. TrueType vulnerability that has
       | existed since the 90s, 2 kernel exploits, a browser exploit, and
       | an undocumented hardware feature that was not used in shipped
       | software"
       | 
       | https://x.com/sweis/status/1740092722487361809?s=46&t=E3U2EI...
        
         | sweis wrote:
         | The video of the talk is online now too:
         | https://www.youtube.com/watch?v=7VWNUUldBEE
        
       | londons_explore wrote:
        | CoreSight is not some backdoor - it's a debug feature of all ARM
        | CPUs. This looks like a necessary extension to CoreSight to work
        | with Apple's memory protection stuff.
       | 
       | Even though no public documentation exists, I'm sure thousands of
       | Apple engineers have access to a modded gdb or other tooling to
       | make use of it.
        
       ___________________________________________________________________
       (page generated 2023-12-27 23:00 UTC)