[HN Gopher] Battering RAM - Low-cost interposer attacks on confi...
       ___________________________________________________________________
        
       Battering RAM - Low-cost interposer attacks on confidential
       computing
        
       Author : pabs3
       Score  : 124 points
       Date   : 2025-10-06 07:47 UTC (15 hours ago)
        
 (HTM) web link (batteringram.eu)
 (TXT) w3m dump (batteringram.eu)
        
       | schoen wrote:
       | I think I talked about this possibility with Bunnie Huang about
       | 15 years ago. As I recall, he said it was conceptually
       | achievable. I guess it's also practically achievable!
        
       | no_time wrote:
       | I find it reassuring that you can still get access to the data
       | running on _your own_ device, despite all the tens of thousands
       | of engineering hours being poured into preventing just that.
        
         | throawayonthe wrote:
          | I doubt you own hardware capable of running any of the
          | confidential computing technologies mentioned.
        
           | kedihacker wrote:
            | Well, microcontrollers with DRM and secure enclaves can
            | prevent you from repairing your own device.
        
           | no_time wrote:
            | My 2017 bottom-shelf Lenovo has SGX whether I like it or not.
           | 
           | In current year you can't really buy new hardware without
           | secure enclaves[0], be it a phone, a laptop or server. Best
           | you can do is refuse to run software that requires it, but
            | even that will become tough when governments roll out
           | mandatory software that depends on it.
           | 
           | [0]: unless you fancy buying nerd vanity hardware like a
           | Talos POWER workstation with all the ups and downs that come
           | with it.
        
             | RedShift1 wrote:
             | Pretty sure you can turn off SGX in the BIOS?
        
             | da768 wrote:
             | Intel killed SGX on consumer CPUs a while ago
             | 
             | https://news.ycombinator.com/item?id=31047888
        
               | heavyset_go wrote:
                | Intel TXT is another related trusted
                | execution/attestation/secure enclave feature; not
                | sure how prevalent that one is, though.
        
       | Simple8424 wrote:
       | Is this making confidential computing obsolete?
        
         | Harvesterify wrote:
          | In their current form, AMD's and Intel's proposals never
          | fulfilled the Confidential Computing promises. One can
          | hope they will do better in the next iterations of
          | SGX/TDX/SEV, but they were always broken, by design.
        
         | dist-epoch wrote:
          | That's like saying a security vulnerability in
          | OpenSSL/OpenSSH makes SSL/SSH obsolete.
        
           | JW_00000 wrote:
            | It's a bit more fundamental in my opinion. Cryptographic
            | techniques are supported by strong mathematics, while I
            | believe hardware-based techniques will always be
            | vulnerable to a sufficiently advanced hardware-based
            | attack. In theory, there exists an unbreakable version
            | of OpenSSL
           | ("under standard cryptographic assumptions"), but it is not
           | evident that there even is a way to implement the kind of
           | guarantees confidential computing is trying to offer using
           | hardware-based protection only.
        
             | dist-epoch wrote:
              | A proof of existence does exist: some Xbox variant has
              | now remained unbroken (not jailbroken) for more than
              | 10 years, and not for lack of trying.
             | 
              | Credit/debit cards with chips (EMV) are another
              | existence proof that hardware-based protection can
              | work.
             | 
             | > It is not evident that there even is a way to implement
             | the kind of guarantees confidential computing is trying to
             | offer using hardware-based protection only.
             | 
              | Not in the absolute sense, but in the sense of the
              | more than $10M required to break it (atomic microscopes
              | to extract keys from CPU gates, ...), and even that
              | breaks a single specific device, not the whole class.
        
               | hansvm wrote:
               | As soon as a bad actor has a single key the entire class
               | is broken since the bad actor can impersonate that
               | device, creating a whole cloud of them if they want.
        
               | dist-epoch wrote:
                | You would not be able to use that cloud of
                | impersonated devices online - Microsoft would see the
                | same device connecting multiple times and ban it.
               | 
                | And the key would not allow you to jailbreak another
                | Xbox.
               | 
               | So at most you might be able to make a PC look like an
               | Xbox, but a PC is more expensive to start with.
               | 
                | So it's unclear exactly what you have accomplished.
        
       | fweimer wrote:
       | I'm kind of confused by AMD's and Intel's response. I thought
       | both companies were building technology that allows datacenter
       | operators to prove to their customers that they do not have
       | access to data processed on the machines, despite having physical
       | access to them. If that's out of scope, what is the purpose of
       | these technologies?
        
         | Harvesterify wrote:
         | Security theater, mostly.
        
         | hbbio wrote:
         | TEEs don't work, period.
         | 
         | FHE does (ok, it's much slower for now).
        
           | LPisGood wrote:
           | Why do you say TEEs don't work at all?
        
             | treyd wrote:
              | TEEs, as they're marketed, require a true black box.
              | True black boxes do not exist, as a property of the
              | rules of our universe.
             | 
              | You can ALWAYS break them, it's just a matter of cost,
              | _even assuming_ they're perfectly designed and have no
              | design/implementation flaws. And they're often not
              | perfectly designed, so breaking them sometimes
              | requires no physical hardware tampering at all.
        
               | rossjudson wrote:
               | The point of security efforts is to make an attacker's
               | life harder, not to construct perfect defenses (because
               | there's no such thing, as you've noted).
               | 
                | TEEs make attackers' lives harder. Unless you can
                | find a way to make your interposer _invisible_ and
                | _undetectable_, the value is limited.
        
               | fpoling wrote:
                | Quantum mechanics, with its no-cloning property,
                | implies that a true black box can be created.
        
         | davemp wrote:
          | I've always assumed it's a long-term goal for total DRM.
        
         | heavyset_go wrote:
         | Remote attestation of our personal devices, including
         | computers, the apps we run and the media we play on them.
         | 
         | The server side also has to be secure for the lock-in to be
         | effective.
        
         | mike_hearn wrote:
         | For Intel it's not out of scope, it's just that the specific
         | CPUs they attacked fall into a tech mid-point in which Intel
         | temporarily descoped bus interposers from the threat model to
         | pivot in the market towards encrypting much larger memory
          | spaces. From Alder Lake onwards they fixed the issue in a
          | different way from classic "client SGX", which had the
          | most cryptographically robust protections and is not
          | vulnerable to this attack, but which imposed higher memory
          | access costs that scaled poorly as the size of protected
          | RAM grew.
         | 
         | For AMD they just haven't invested as much as Intel and it's
         | indeed out of scope for them. The tech still isn't useless
         | though, there are some kinds of attacks that it blocks.
        
           | saurik wrote:
           | You've mentioned multiple times on this thread that Intel has
           | a fix for this in their latest CPUs, but I haven't seen that
            | called out anywhere else... I've only seen the idea that
            | the latest CPUs use DDR5 (which is also true of AMD
            | SEV-SNP's EPYC 9005) and so happen to be too difficult
            | (for now) for either the Battering RAM or WireTap teams?
        
       | matja wrote:
       | > No, our interposer only works on DDR4
       | 
       | Not surprising - even having 2 DDR5 DIMMs on the same channel
       | compromises signal integrity enough to need to drop the frequency
       | by ~30-40%, so perhaps the best mitigation at the moment is to
       | ensure the host is using the fastest DDR5 available.
       | 
        | So - is the host DRAM/DIMM technology and frequency included
        | in the remote attestation report for the VM?
        
         | seg_lol wrote:
         | All of that info is faked. You should never trust a cloud vm.
         | That is why it is called "public cloud".
        
           | matja wrote:
            | The attestation report is signed by a key held in the
            | PSP hardware, not accessible to any OS or software, and
            | can then be validated with the vendor's
            | certificate/public key.
           | If that can be faked, are you saying that those private keys
           | are compromised?
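            | 
            | If it helps, a minimal sketch of that validation
            | (hypothetical report layout and function name; the real
            | SEV-SNP report and VCEK certificate chain carry more
            | fields):
            | 
            |     # Hedged sketch: check a report signature
            |     # against the vendor's public key. SEV-SNP
            |     # uses ECDSA P-384 with SHA-384.
            |     from cryptography.exceptions import InvalidSignature
            |     from cryptography.hazmat.primitives import hashes
            |     from cryptography.hazmat.primitives.asymmetric import ec
            |     
            |     def report_is_genuine(body, sig, vendor_pubkey):
            |         try:
            |             vendor_pubkey.verify(
            |                 sig, body, ec.ECDSA(hashes.SHA384()))
            |             return True
            |         except InvalidSignature:
            |             return False
            | 
            | Verification itself is sound; the interposer operates
            | below it, on the memory the PSP is attesting to.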
        
             | heavyset_go wrote:
             | I'm willing to bet if you ran terrorism-as-a-service.com on
             | a protected VM, it wouldn't be secure for long, and if it
             | really came down to it, the keys would be coughed up.
        
             | 1oooqooq wrote:
                | Any key not controlled by you is, by definition,
                | compromised.
                | 
                | What's so difficult to understand? Really.
        
             | michaelt wrote:
             | _> If that can be faked, are you saying that those private
             | keys are compromised?_
             | 
             | As I understand it, the big idea behind Confidential
             | Computing is that huge American tech multinationals AWS,
             | GCP and Azure can't be trusted.
             | 
             | It is hardly surprising, therefore, that the
             | trustworthiness of huge American tech multinationals Intel
             | and AMD should also be in doubt.
        
         | Aurornis wrote:
         | Interposers exist for every type of memory.
         | 
         | We use them during hardware development to look at the
         | waveforms in detail well beyond what is needed to read the
         | bits.
         | 
          | The reason their interposer doesn't work with DDR5 is that
          | they designed it with DDR4 as the target, not because DDR5
          | is impossible to snoop.
        
           | AlotOfReading wrote:
            | The mental image I'm getting from your description is a
            | high-speed o-scope probe copy-pasted 80 times, which
            | would obviously be insane. But Keysight docs show what
            | looks like an entirely normal PCB that literally
            | interposes the BGA with trace wires on every pin, which
            | looks far too simple for a multi-GHz signal.
           | 
           | What do they actually look like and are there teardowns that
           | show the analog magic?
        
           | trebligdivad wrote:
           | They're not snooping, they're modifying the address
           | dynamically to cause aliasing.
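            | 
            | A toy model of the aliasing, if it helps (assuming a
            | single gated address bit; which bit, and how it is
            | switched, is hypothetical):
            | 
            |     # The interposer forces one address bit low on
            |     # its way to the DIMM, so two distinct physical
            |     # addresses land on the same DRAM cell.
            |     ALIAS_BIT = 1 << 30
            |     
            |     dram = {}  # keyed by what the DIMM actually sees
            |     
            |     def dimm_addr(cpu_addr):
            |         return cpu_addr & ~ALIAS_BIT
            |     
            |     def write(cpu_addr, data):
            |         dram[dimm_addr(cpu_addr)] = data
            |     
            |     def read(cpu_addr):
            |         return dram[dimm_addr(cpu_addr)]
            |     
            |     write(0x1000, "victim ciphertext")
            |     print(read(0x1000 | ALIAS_BIT))  # same cell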
        
       | commandersaki wrote:
        | I like how the FAQ doesn't actually answer the questions
        | (feels like AI slop, but I'm giving the benefit of the
        | doubt), so I will answer on their behalf, without even
        | reading the paper:
       | 
       |  _Am I impacted by this vulnerability?_
       | 
       | For all intents and purposes, no.
       | 
       |  _Battering RAM needs physical access; is this a realistic attack
       | vector?_
       | 
       | For all intents and purposes, no.
        
         | JW_00000 wrote:
         | You're twisting their words. For the second question, they
         | clearly answer yes.
         | 
         | It depends on the threat model you have in mind. If you are a
         | nation state that is hosting data in a US cloud, and you want
         | to protect yourself from the NSA, I would say this is a
         | realistic attack vector.
        
           | commandersaki wrote:
            | I haven't twisted their words; they didn't actually
            | answer the question, so I gave my own commentary. _For
            | all intents and purposes_, as in _practically speaking_,
            | this isn't going to affect anyone*. The nation-state
            | threat is atypical even for customers of confidential
            | computing, the biggest pool of users I guess being those
            | that use Apple Intelligence (which wouldn't be vulnerable
            | to this attack, since Apple's servers use soldered
            | memory and a different TEE).
            | 
            | Happy to revisit this in 20 years and see if this attack
            | is found in the wild and is representative. (I note it
            | has been about 20 years since cold boot / evil maid was
            | published and we still haven't seen or heard of it being
            | used in the wild, though the world has largely moved on
            | to soldered RAM for portable devices.)
           | 
            | * They went to great lengths to provide a logo, a fancy
            | website and domain, etc. to publicise the issue, so they
            | should at least give an accurate impression of severity.
        
             | Simple8424 wrote:
             | There is clearly a market for this and it is relevant to
             | those customers. The host has physical access to the
             | hardware and therefore can perform this kind of attack.
             | Whether they have actually done so is irrelevant. I think
             | the point of paying for confidential computing is knowing
             | they cannot. Why do you consider physical access not a
             | realistic attack vector?
        
               | commandersaki wrote:
               | _Why do you consider physical access not a realistic
               | attack vector?_
               | 
                | First, we should be careful about what I said: I
                | never said physical access is unrealistic, and I
                | certainly didn't say this attack is not viable*. What
                | I am saying is that this is not a concern outside a
                | negligible fraction of the population. They will
                | never be affected, as we have seen with the case of
                | Cold Boot and all the other infeasible fear-mongering
                | attacks. But sure, add it to your vulnerability
                | scanner or whatever when you detect SGX/etc.
               | 
               | But why should this not be a concern for an end user that
               | may have their data going through cloud compute or a
               | direct customer? It comes down to a few factors: scale,
               | insider threats and/or collusion, or straight up cloud
               | providers selling backdoored products.
               | 
                | Let's go in reverse. Selling backdoored products is
                | an instant way to lose goodwill, reputation, and your
                | customer base, with little to no upside even if you
                | succeed in the long term. I don't see Amazon, Oracle,
                | or whoever stooping this low. A company with no or
                | low reputation will not even make a shortlist for CCC
                | (confidential cloud compute).
               | 
                | Next is insider threats. Large cloud providers have
                | physical security locked down pretty tight. Very few
                | in an organisation know where the actual datacentres
                | are. Cull that list by 50% for those that can gain
                | physical access. Now you need a justification for why
                | you need access to the specific physical machine you
                | want to _target_ (does the system have failed
                | hardware or bad RAM?)**. And so on and so forth. Then
                | there is physical monitoring that would capture a
                | recording of you performing the act, and the huge
                | deterrent of losing your cushy job and being
                | sentenced to prison.
               | 
                | Next, collusion: consider a state actor/intelligence
                | community compelling a cloud provider to do this (but
                | it could be anyone, such as an online criminal group
                | or a next-door neighbour). This is too much hassle
                | and headache; they would instead try to get more
                | straightforward access. The UK, for example, after
                | exhausting all other ways of getting access to a
                | target's data, could supply a TCN to a cloud provider
                | to deploy these interposers for a target, but they
                | would still need to get root access to the system.
                | The reality is this would be put in the too-hard
                | basket; they would probably find easier and more
                | reliable ways to get the data they seek (which is
                | more specific than random page accesses).
               | 
                | Finally, I think the most important issue here is
                | scale. There are a few things I think about when I
                | think of scale: first is the populace that should
                | generally be worried (which, as I stated earlier, is
                | a negligible amount). There are the customers of CCC.
                | Then there are the end users that actually use CCC.
                | There's also the number of interposers that can be
                | deployed surreptitiously. At the moment, very few
                | services use CCC; the biggest are Apple PCC and
                | WhatsApp private processing for AI. Apple is not
                | vulnerable for a few reasons. Meta does use SEV-SNP,
                | and I'm sure they'd find this attack intriguing as a
                | technical curiosity, but it won't change anything
                | they do, as they're likely to have tight physical
                | controls and to separate those from the personnel
                | that have root access to the machines. But outside of
                | these few applications, which are unlikely to be
                | targeted, use of CCC is nascent, so there's
                | negligible chance the general public will even be
                | exposed to the possibility of this attack.
               | 
               | I've ignored the supply chain attack scenario which will
               | be clear as you read what follows.
               | 
               | A few glaring issues with this attack:
               | 
               | 1. You need root on the system. I have a cursory
               | understanding of the threat model here in that the
               | OS/hypervisor is considered hostile to SGX, but if you're
               | trying to get access to data and you control the
               | OS/hypervisor, why not just subvert the system at that
               | level rather than go through this trouble?
               | 
                | 2. You need precise control of memory allocation to
                | alias memory. Again, this goes back to my previous
                | point: why would you go to all this trouble when you
                | have front-door access?
               | 
                | (Note I eventually did read the paper, but my
                | commentary based on the website itself was still a
                | good indicator that this affects virtually no one.)
               | 
               | * The paper talks about _feasibility_ of the attack when
               | they actually mean how _viable_ it is.
               | 
                | ** You can't simply reap the rewards of targeting a
                | random machine; you need root access for this to
                | work. Also, the datacentre technicians at these cloud
                | companies usually don't have the information a priori
                | of which customer maps to which physical server.
        
             | JW_00000 wrote:
              | They answer the second question quite clearly in my
              | opinion:
              | 
              |     It requires only brief one-time physical access,
              |     which is realistic in cloud environments,
              |     considering, for instance:
              |     * Rogue cloud employees;
              |     * Datacenter technicians or cleaning personnel;
              |     * Coercive local law enforcement agencies;
              |     * Supply chain tampering during shipping or
              |       manufacturing of the memory modules.
             | 
             | This reads as "yes". (You may disagree, but _their_ answer
             | is "yes.")
             | 
             | Consider also "Room 641A" [1]: the NSA has asked big
             | companies to install special hardware on their premises for
             | wiretapping. This work is at least proof that a similar
             | request could be made to intercept confidential compute
             | environments.
             | 
             | [1] https://en.wikipedia.org/wiki/Room_641A
        
               | commandersaki wrote:
                | _This reads as "yes". (You may disagree, but their
                | answer is "yes.")_
               | 
                | Ah yes, so I bet all these companies that are or were
                | going to use confidential cloud compute aren't going
                | to now, or will kick up a fuss with their cloud
                | vendor. I'm sure all these cloud companies are going
                | to send vulnerability disclosures to all confidential
                | cloud compute customers saying their data could
                | potentially be compromised by this attack.
        
       | munchlax wrote:
       | Dupe of: https://news.ycombinator.com/item?id=45439286
       | 
       | 11 points by mici 4 days ago
        
       | rhodey wrote:
        | I hope people don't give up on TEEs; see AWS Nitro.
       | 
        | The AWS business is built on isolating compute, so IMO AWS
        | are the best choice.
       | 
       | I've built up a stack for doing AWS Nitro dev
       | 
       | https://lock.host/
       | 
       | https://github.com/rhodey/lock.host
       | 
        | With Intel and AMD you need the attestation flow to prove
        | not only that you are using the tech but also who is hosting
        | the CPU.
       | 
        | With AWS Nitro, Amazon is always hosting the CPU.
        
       | addaon wrote:
       | This seems pretty trivial to fix (or at least work around) by
       | adding an enclave generation number to the key initialization
       | inputs. (They mention that the key is only based on the physical
       | address, but surely it has to include CPUID or something similar
       | as well?) Understood that this is likely hardware key generation
       | so won't be fixed without a change, and that persistent
       | generation counters are a bit of a pain... but what else am I
       | missing?
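        | 
        | Something like this, as a hedged sketch (HKDF stands in for
        | whatever the hardware key ladder actually does; all names
        | here are made up, and the per-address tweak stays separate):
        | 
        |     # Mix a chip ID and a persistent generation counter
        |     # into the memory key derivation, so captured
        |     # ciphertext goes stale across enclave generations.
        |     from cryptography.hazmat.primitives import hashes
        |     from cryptography.hazmat.primitives.kdf.hkdf import HKDF
        |     
        |     def memory_key(device_secret, cpu_id, generation):
        |         info = cpu_id + generation.to_bytes(8, "little")
        |         return HKDF(algorithm=hashes.SHA256(), length=32,
        |                     salt=None, info=info).derive(device_secret)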
        
         | Veliladon wrote:
          | Need to go Apple-style, where the AES engine is on-die.
          | Only the AES engine and the Secure Enclave know the
          | decryption keys. The CPU doesn't know the decryption key.
          | Nothing is sent in cleartext over the bus.
        
           | mike_hearn wrote:
           | That's how it works already. The memory is encrypted.
           | However, the SGX/SEV model is a very powerful and flexible
           | one - different entities who don't trust one another can
           | share the same hardware simultaneously. If you encrypt all of
           | RAM under a single key, then you can start up a malicious
           | enclave, do some writes - which the CPU will encrypt -
           | capture those writes and redirect them to the memory of a
           | different enclave, and now you can overwrite the memory of
           | that other enclave with your own cleartext.
           | 
           | That such attacks are possible was known from the start. What
           | they're doing here is exploiting the fact that Intel
           | (knowingly!) enabled some hardware attacks on SGX in order to
           | allow enclaves to scale up to much larger amounts of RAM
           | consumed.
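            | 
            | A toy version of that capture-and-redirect, assuming
            | XTS-style encryption where the tweak is just the
            | physical address (a simplification of what the real
            | hardware does):
            | 
            |     # One whole-memory key: ciphertext captured at
            |     # address A stays valid at address A, whichever
            |     # enclave currently owns the page.
            |     import os
            |     from cryptography.hazmat.primitives.ciphers import (
            |         Cipher, algorithms, modes)
            |     
            |     KEY = os.urandom(64)  # AES-256-XTS double key
            |     
            |     def xts(addr):
            |         tweak = addr.to_bytes(16, "little")
            |         return Cipher(algorithms.AES(KEY),
            |                       modes.XTS(tweak))
            |     
            |     def cpu_write(addr, pt):  # what hits the bus
            |         e = xts(addr).encryptor()
            |         return e.update(pt) + e.finalize()
            |     
            |     def cpu_read(addr, ct):
            |         d = xts(addr).decryptor()
            |         return d.update(ct) + d.finalize()
            |     
            |     # Attacker writes at address P; the interposer
            |     # drops the ciphertext into the victim's page at
            |     # P; the victim reads attacker-chosen plaintext.
            |     ct = cpu_write(0x4000, b"attacker-chosen0")
            |     print(cpu_read(0x4000, ct))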
        
       | exabrial wrote:
       | Physical access owns. If the computer can't trust its components
       | what can it do?
        
       | acidburnNSA wrote:
       | Damn. I was hoping that confidential compute could allow nuclear
       | reactor design work (export controlled, not classified) to go
       | into the public cloud and avoid the govcloud high premium costs.
       | But this kind of takes the wind out of the idea.
        
       | mike_hearn wrote:
        | Not a great paper, hence the "advisories" being so short.
        | All they've done is show that some products meet their
        | advertised threat model. Intel has a solution: upgrade your
        | CPU. AMD does not. Once again Intel is ahead when it comes
        | to confidential computing.
       | 
        | The story here is a little complex. Some years ago I flew
        | out to Oregon and met the designers of SGX. It's a good
        | design, and it's to our industry's shame that we haven't
        | used it much, as tech like this can solve a lot of different
        | security and privacy problems.
       | 
        | SGX as originally designed was not attackable this way. This
        | kind of RAM interposer attack was anticipated, and the
        | hardware was designed to block it using memory integrity
        | trees: in other words, memory was not only encrypted by the
        | CPU on the fly (cheap), but RAM was also hashed into a kind
        | of Merkle tree (IIRC) which the CPU would check on access.
        | So even if you knew the encryption key, you could not
        | overwrite RAM or play games with it. It's often overlooked,
        | but encryption doesn't magically make storage immutable. An
        | attacker can still overwrite encrypted data, delete parts,
        | replay messages, redirect your write requests, or otherwise
        | mess with it. It takes other cryptographic techniques to
        | block those kinds of activities, and "client SGX" had them
        | (I'm not sure SEV ever did).
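        | 
        | Roughly this shape, as a sketch (real hardware uses
        | per-block counters and MACs with an on-die root, not SHA-256
        | over all of DRAM):
        | 
        |     # Memory integrity tree: hash blocks pairwise up to
        |     # a root the CPU keeps on-die, so replayed or
        |     # overwritten ciphertext is caught on the next check.
        |     import hashlib
        |     
        |     def h(b):
        |         return hashlib.sha256(b).digest()
        |     
        |     def tree_root(blocks):
        |         level = [h(b) for b in blocks]
        |         while len(level) > 1:
        |             level = [h(level[i] + level[i + 1])
        |                      for i in range(0, len(level), 2)]
        |         return level[0]
        |     
        |     ram = [b"block%d" % i for i in range(8)]
        |     root = tree_root(ram)  # held inside the CPU package
        |     
        |     ram[3] = b"stale"      # interposer replays old bytes
        |     assert tree_root(ram) != root  # CPU detects the swap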
       | 
       | This made sense because SGX design followed security best
       | practices, namely, you should minimize the size of the trusted
       | computing base. More code that's trusted = more potential for
       | mistakes = more vulnerabilities. So SGX envisions apps having
       | small trusted "enclaves", sort of like protected kernels, that
       | untrusted code then uses. Cryptography ties the whole thing
       | together. In a model like this an enclave doesn't need a large
       | amount of RAM because the bulk of the app is running outside of
       | the TCB.
       | 
       | Unfortunately, at this point Intel discovered a sad and
       | depressing but fundamental truth about the software industry: our
       | tolerance for taking on additional complexity to increase
       | security rounds to zero, and the enclave programming model is
       | complex. The number of people who actually understand how to use
       | enclaves as a design primitive can probably fit into a single
       | large conference room. The number of apps that used them in the
       | real world, in a way that actually met some kind of useful threat
       | model, I'm pretty sure is actually near zero [1].
       | 
       | This isn't the fault of SGX! From a theoretical perspective, it
       | is sound and the way it was meant to be used is sound. But
       | actually exploiting it properly required more lift than the
       | software industry could give. For example, to obtain the biggest
       | benefits (SaaS you can use without trusting it) would have
       | required some tactical changes to web browsers, changes to
       | databases, changes to how such apps are designed and so on.
       | Nobody tried to coordinate such changes and Intel, being a
       | business, could not afford to wait for a few decades to see if
       | anyone picked up the ball on that (their own software engineering
       | efforts were good as far as they went but not ambitious enough to
       | pull off the vision).
       | 
       | Instead what happened is that potential customers said to them
       | (and AMD): look, we want extra security, but we don't want to
       | make any effort. We want to just run containers/VMs in the cloud
       | and have them be magically secure. Intel looked at what they had
       | and said OK, well, um, I guess we can maybe run bigger apps
       | inside enclaves. Maybe even whole VMs. So they went away and did
       | a redesign, but then they hit a fundamental physics problem: as
       | you expand the amount of encrypted and protected RAM the Merkle
       | tree protecting its integrity gets bigger and bigger. That means
       | every cache miss has to recursively do a tree walk to ensure the
       | data read from RAM is correct. And that kills performance. For
       | small enclaves the tree is shallow and the costs aren't too bad.
       | For big enclaves, well ... the performance rapidly becomes
       | problematic, especially as the software inside expects to be
       | running at full speed (as we are no longer designing with SGX in
       | mind now but just throwing any old stuff into the protected
       | space).
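        | 
        | Back-of-the-envelope (leaf size and fan-out here are
        | assumptions in the ballpark of published integrity-tree
        | designs, not Intel's actual parameters):
        | 
        |     # Tree levels touched per cache miss grow with the
        |     # log of the protected region.
        |     import math
        |     
        |     LEAF = 64       # bytes per tree leaf (assumption)
        |     LOG2_ARITY = 3  # 8-ary tree (assumption)
        |     
        |     for size in (128 * 2**20, 4 * 2**30, 512 * 2**30):
        |         depth = math.ceil(
        |             math.log2(size // LEAF) / LOG2_ARITY)
        |         print(f"{size / 2**30:7.2f} GiB -> {depth} levels")
        | 
        | The depth grows slowly, but every level is a potential extra
        | memory access on the critical path of a cache miss.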
       | 
       | So Intel released a new version gamely called "scalable SGX"
       | which scaled by removing the memory integrity tree. As the point
       | of that tree was to stop bus interposer attacks, they provided an
       | updated threat model that excluded them. The tech is still useful
       | and blocks some attacks (e.g. imagine a corrupted developer on a
       | cloud hypervisor team). But it was no longer as strong as it once
       | was.
       | 
       | Knowing this, they set about creating yet another memory
       | encryption tech called TME-MK which assigns each memory page its
       | own unique encryption key. This prevented the kind of memory
       | relocation attacks the "Battering RAM" interposer is doing. They
       | also released a new tech that is sort of like SGX for whole
       | virtual machines, formally giving up on the idea the software
       | industry would ever actually try to minimize TCBs. Sad, but there
       | we go. Clouds have trusted brands and people aren't bothered by
       | occasional reports of global root exploits in Azure. It would
       | take a step change event to get more serious about this stuff.
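        | 
        | A sketch of what per-page keys buy against relocation (the
        | derivation here is made up; the point is just that
        | ciphertext moved between pages stops decrypting):
        | 
        |     # TME-MK style: each page gets its own key, so
        |     # ciphertext lifted from one page and dropped into
        |     # another decrypts to junk.
        |     import hashlib, os
        |     from cryptography.hazmat.primitives.ciphers import (
        |         Cipher, algorithms, modes)
        |     
        |     ROOT = os.urandom(32)
        |     
        |     def xts_for(page, offset):
        |         key = hashlib.sha512(
        |             ROOT + page.to_bytes(8, "little")).digest()
        |         tweak = offset.to_bytes(16, "little")
        |         return Cipher(algorithms.AES(key),
        |                       modes.XTS(tweak))
        |     
        |     e = xts_for(5, 0).encryptor()
        |     ct = e.update(b"sixteen byte blk") + e.finalize()
        |     d = xts_for(9, 0).decryptor()
        |     print(d.update(ct) + d.finalize())  # junk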
       | 
       | [1] You might think Signal would count. Its use of SGX does help
       | to reduce the threat from malicious or hacked cloud operators,
       | but it doesn't protect against the operators of the Signal
       | service themselves as they control the client.
        
         | michaelt wrote:
         | _> This made sense because SGX design followed security best
         | practices, namely, you should minimize the size of the trusted
         | computing base. More code that 's trusted = more potential for
         | mistakes = more vulnerabilities. So SGX envisions apps having
         | small trusted "enclaves", sort of like protected kernels, that
         | untrusted code then uses._
         | 
         | Let's say I was Google building gmail. What would I put in the
         | 'secure enclave' ?
         | 
         | Obviously the most important thing is the e-mail bodies, that's
         | the real goldmine. And of course the logins / user session
         | management. The SSL stuff naturally, the certificate for
         | mail.google.com is priceless. And clearly if an attacker could
         | compromise the server's static javascript it'd be game over,
         | security-wise.
         | 
         | At that point, is there anything left _outside_ the secure
         | enclave?
        
         | saurik wrote:
         | I think you _might_ be confusing Battering RAM with the recent
         | attacks Heracles and Relocate+Vote? Regardless, these pages are
         | not being  "relocated": they are being remapped to a different
         | physical location not to use a different key, but to be able to
         | read/write the ciphertext itself by way of a separate
         | _unencrypted_ address.
         | 
         | TME-MK thereby doesn't do much against this attack. I mean, I
         | guess it slightly improves one of the attacks in the paper (as
         | Intel's CPU was especially bad with the encryption, using the
         | same key across multiple VMs; AMD did not have this issue), but
         | you can use Battering RAM to just get a ciphertext sidechannel
         | (similar to WireTap).
         | 
          | Like, think about it this way: the real attack here is
          | that, for any given block of memory (and these blocks are
          | tiny: 16 bytes), the encryption key + tweak doesn't change
          | with every write... this is the same for TME and TME-MK.
          | This means that you can find 16 bytes that are valuable,
          | characterize the possible values, and dump a key.
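          | 
          | A toy version of that dictionary attack, under the same
          | deterministic per-address encryption assumption as above:
          | 
          |     # The same 16 bytes at the same address always
          |     # encrypt the same way, so enumerate candidate
          |     # values and match what appears on the bus.
          |     import os
          |     from cryptography.hazmat.primitives.ciphers import (
          |         Cipher, algorithms, modes)
          |     
          |     KEY = os.urandom(64)
          |     
          |     def enc16(addr, pt):
          |         tweak = addr.to_bytes(16, "little")
          |         e = Cipher(algorithms.AES(KEY),
          |                    modes.XTS(tweak)).encryptor()
          |         return e.update(pt) + e.finalize()
          |     
          |     ADDR = 0x7000  # block holding a small secret
          |     table = {enc16(ADDR, i.to_bytes(16, "little")): i
          |              for i in range(256)}
          |     
          |     seen = enc16(ADDR, (42).to_bytes(16, "little"))
          |     print(table[seen])  # -> 42
          | 
          | In reality the table is built by replaying
          | attacker-controlled writes through the alias rather than
          | by knowing the key, but the determinism is the same.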
        
       ___________________________________________________________________
       (page generated 2025-10-06 23:01 UTC)