[HN Gopher] Lion: A formally verified, 5-stage pipeline RISC-V core
       ___________________________________________________________________
        
       Lion: A formally verified, 5-stage pipeline RISC-V core
        
       Author : varbhat
       Score  : 212 points
       Date   : 2021-03-04 11:17 UTC (11 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | vzaliva wrote:
        | The term "formally verified" could be misleading. Some people
        | assume it means "100% bug free". Whenever someone claims
        | something is formally verified one should ask what properties
        | were verified exactly. In this work the approach they use
        | (bounded model checking) _could_ find some bugs on the subset
        | of the RISC-V architecture they formalized. I recommend looking
        | at their slides for a better understanding of the scope of the
        | work:
        | http://www.clifford.at/papers/2017/riscv-formal/slides.pdf
        | Nevertheless it is definitely a very impressive work and
        | practically useful.
        
         | mhh__ wrote:
         | There are many (probably apocryphal) tales of "formally"
         | verified systems being turned on then immediately crashing for
         | this very reason
        
         | capableweb wrote:
         | In the end it's up to the reader to understand what it means.
          | Many think "100% test coverage" is something you should strive
          | towards as they think it means fewer bugs, while in reality it
          | just means that the test runner at one point or another
          | accessed that line of code.
        
       | gchadwick wrote:
        | Looks like an interesting project, though I would note they've
        | just done the 'easy' part so far. As far as I can see it's base
        | RV32I with no CSRs, exception/interrupt support, PMP or other
        | MMU. These are the features where the bugs tend to lie, and
        | they also complicate your formal verification.
       | 
        | Still, you have to start somewhere; I will be interested to see
        | their progress as they tackle the things listed above.
        
         | tachyonbeam wrote:
          | I'm kind of skeptical of formal verification. For instance, it
          | wouldn't have prevented bugs like Spectre or Meltdown. It can
          | only tell you that your implementation matches some spec, but
          | your spec can still be incomplete or buggy. At the end of the
          | day, there's no substitute for extensive real-world testing.
        
           | sanxiyn wrote:
           | Formal verification catches a lot of bugs, some of which
           | won't be caught by usual testing. It's still valuable even if
            | it doesn't catch all bugs. I just think of it as a more
            | efficient way to run exhaustive testing.
        
           | agentultra wrote:
           | It's one tool of many. Microsoft used it to catch an error in
           | the design of the memory controller in the XBox360 before
           | launch. They also use it in the design of the consistency
           | levels in CosmosDB which are tied directly to their SLAs.
           | Hardware has a high turn-around time between iterations so
           | being able to understand the design more clearly and catch
           | errors earlier is a great investment.
        
           | aseipp wrote:
           | This is such a funny post because formal verification
           | techniques tend to see a lot of success in the hardware
           | world, from lightweight ones to heavy ones, much more so than
           | software, because many behaviors can be bounded over some
           | finite amount of time (or inductively proven over all time,
           | or equivalence checking, etc), and because hardware designs
           | have _massive_ incentives to get it right the first time and
           | not blow a billion dollars on a mistake.
           | 
           | If you write the basic cookie cutter "but but but, it's all
           | just a spec, so we can't ever know anything!!!" it should be
           | a requirement to submit a 5 page essay describing the
           | algorithms/theory behind formal verification techniques, so
           | they can prove they understand them, as opposed to only
           | "understanding" how to make cookie-cutter bait posts.
        
             | Symmetry wrote:
             | The most intimidating debugging scenario I've ever heard
             | described was a physical register leak in a renamer.
        
               | a1369209993 wrote:
               | I mean, strictly speaking that's just ('just') a memory
               | leak, but still, yikes.
        
             | DSingularity wrote:
              | Ouch. Don't be so harsh. Your point is valid: specification
              | bugs are definitely important to prevent. But OP's question
              | is also valid: what about sub-specification bugs?
             | 
             | No need to roast him.
        
               | thechao wrote:
               | GGP's post is _classic_ HN middlebrow. Clearly someone
                | who "knows what's best" for HW, but has never been
               | involved in a major HW product, ever.
               | 
                | Serious production HW is formal _all over_. GGP's
               | opinion is sort of like going to Google's search-engine
               | group and saying that internet connectivity is a bad idea
               | because it'll never fully capture human emotion. It's not
               | even wrong.
        
               | random5634 wrote:
                | My partner did hardware stuff. I couldn't BELIEVE how
                | much time they spent in verification space (and how
                | annoying it seemed, and how retro the tooling appeared!).
                | Much different from programming, where I can iterate
                | 100x per day.
        
               | dnautics wrote:
               | verification is not always formal, though - it's often
               | "fuzz testing" (I have done this).
        
               | [deleted]
        
               | [deleted]
        
           | dw-im-here wrote:
           | This is the hardware equivalent of pretending javascript is
           | fine because having types won't stop all bugs.
        
           | gchadwick wrote:
           | I'm not sure we're going to see formal proof totally replace
           | standard verification any time soon however when combined
           | with more traditional verification techniques it's very
           | effective. It can find interesting bugs very quickly that
           | other methods may have taken significant cycles to find. As
           | it approaches the problem very differently you get a
           | different set of bugs found from it. It's also great for
           | early bring-up work. With a handful of constraints you can
           | start looking at least some aspects of design behaviour
           | without needing an extensive testbench. When I was working on
           | L1 memory system design at arm I used formal tooling
           | extensively to bring up parts of the memory system before the
           | full testbench was available.
           | 
           | You may start to see formal only verification on
           | smaller/simpler blocks. E.g. imagine some bus adapter, with a
           | decent set of formal properties that specify the bus
           | functionality you may have to do very little work to verify
           | it with formal tools.
           | 
            | There's also the use of SVAs within standard verification.
            | This isn't formal verification, but if you've done any
            | formal work you end up with a bunch of new checks you can
            | run in your standard verification flows too, again finding
            | bugs more quickly. You can get a nice feedback loop between
            | the two: you develop constraints to enable formal testing;
            | those constraints become assertions in some other
            | verification flow; it finds a violation of the constraint;
            | you realise it needs adapting to cover some extra behaviour,
            | and your formal environment is improved (in effect you've
            | found your spec bug).
           | 
            | View it as another tool in the toolbox, one that is becoming
            | ever more necessary as hardware designs become ever more
            | complex.
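            | 
            | As a rough illustration of what such assertion-style
            | properties look like (a hypothetical sketch in Python rather
            | than SVA, for brevity; the fixed-priority arbiter below is
            | an invented example, not from the project), for a block this
            | small "checking the property" can be a full sweep of the
            | input space:

```python
# Exhaustive check of a tiny block: a fixed-priority arbiter that
# grants the lowest-set request bit. The two properties below mirror
# the kind of assertions (SVAs) you would hand to a formal tool.

def arbiter(req: int) -> int:
    """Grant the least significant set bit of an 8-bit request vector."""
    return req & -req & 0xFF

def grant_subset_of_req(req: int) -> bool:
    # A grant is only ever given to an active requester.
    return arbiter(req) & ~req == 0

def grant_one_hot(req: int) -> bool:
    # The grant is one-hot, or zero when nobody requests.
    return bin(arbiter(req)).count("1") == (0 if req == 0 else 1)

# The state space is tiny, so the "model check" is a full sweep.
assert all(grant_subset_of_req(r) and grant_one_hot(r) for r in range(256))
```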
        
           | __s wrote:
           | Code Complete gets into the idea that you want multiple
           | testing methods since they tend to have complementary
           | coverage
           | 
           | Reducing bugs to the spec has the same benefits as Rust
           | reducing memory safety bugs to unsafe blocks
           | 
           | & to your point: Spectre & Meltdown took decades for real
           | world testing to unearth. So now that we know them, it'd be
           | nice to come up with a formalization in the spec to address
           | observable information leaks so that future chips have
           | reduced attack surface in the space of spec implementations
           | 
           | It's hard to prove you're safe from novel threats, but let's
           | not let that stop us from proving we're safe from old tricks
        
             | tachyonbeam wrote:
             | Fair enough.
        
       | herodoturtle wrote:
       | "RISC architecture is gonna change everything"
       | 
       | :)
        
         | msla wrote:
         | "Posted from my ARM-based cell phone."
         | 
         | Or, hell, some ARM-based Mac, these days.
        
       | fctorial wrote:
       | What is this project? Can anyone ELI5?
        
         | addaon wrote:
         | A core is the part of a processor that actually runs
         | instructions -- a processor consists of one or more cores,
         | peripherals, etc. Software consists of a sequence of
         | instructions ("add A and B") that, together, accomplish a goal.
         | But even perfect software is only correct if the core that it
         | runs on is correct. Historically, there have been a small
         | number of high-profile correctness issues in various processor
         | cores (e.g. Intel's Pentium FDIV bug), and almost every
         | processor has a number of small correctness issues documented
         | in errata -- or, not yet discovered. In designing and building
         | a processor, correctness is important, and hard -- generally,
         | much more time is spent testing and demonstrating correctness
         | of a core than actually designing it.
         | 
         | Formal verification aims to not just test and demonstrate
         | correctness, but prove it. That is, under certain assumptions,
         | one can prove that, for example, the actual transistors used to
         | implement "add A and B", when connected in the intended way,
         | have the same semantics ("do the same thing") as "add A and B".
         | 
         | In the extreme case, formal methods can replace testing. In
         | practice, they can replace some portion of testing; but those
         | assumptions that they're built on can be a bit shaky. Formal
         | methods can also be /hard/ -- it can take more time to prove
         | something correct than to just test it thoroughly enough to
         | convince everyone. But, when done right, it does lead to higher
         | confidence overall.
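          | 
          | As a toy illustration of "implementation matches spec" (a
          | hedged sketch in Python; real tools check this symbolically
          | rather than by enumeration, but the idea is the same):

```python
# A gate-level ripple-carry adder, built only from AND/OR/XOR, checked
# for equivalence against the arithmetic specification. For 4-bit
# operands every input combination can simply be enumerated.

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """One-bit full adder from primitive gates: returns (sum, carry)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add4(x: int, y: int) -> int:
    """4-bit adder wired from four full adders (final carry discarded)."""
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# Equivalence check: implementation == specification, for all inputs.
assert all(ripple_add4(x, y) == (x + y) % 16
           for x in range(16) for y in range(16))
```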
        
       | seanmclo wrote:
       | I've been looking for an alternative hardware description
       | language to Verilog/SystemVerilog because they're not very
       | readable languages. But after skimming this source code, my
       | initial thought is that I hope Haskell doesn't take off. This is
       | extremely difficult to read.
       | 
       | Maybe I just don't know Haskell well enough, though.
        
         | jand wrote:
          | A while back I came across SystemC [1]. Have a look, even if
         | C++ might be perceived as a joke in a thread about formally
         | verified RISC-V.
         | 
         | [1] https://en.wikipedia.org/wiki/SystemC
        
           | UncleOxidant wrote:
           | You're going to have a hard time synthesizing SystemC. Yes,
           | there are tools available, but they cost a lot of $$$. And
           | then there are the error messages you get from SystemC - if
           | you like sifting through pages of C++ template error messages
           | to find your mistake, you'll love SystemC.
        
         | chrsw wrote:
          | I feel like I'm in a similar boat to you. I came into the ASIC
         | world from the EE side of things: digital design, VLSI layout
         | and traditional HDLs. I never learned functional programming so
         | the entire paradigm of FP is opaque to me. Verilog and VHDL
         | have their problems but at least I can get somewhere with them.
         | It's up to me to learn these new HDL technologies because it
         | appears this is where the industry is headed. I think starting
         | with programming language theory and going from there will help
         | with getting my head around technologies like Chisel and Clash.
        
         | mhh__ wrote:
          | Bluespec is Haskell underneath, SystemVerilog on top, and it
          | seems quite nice.
         | 
         | It's open source as of about a year ago
        
         | a1369209993 wrote:
         | > But after skimming this source code, my initial thought is
         | that I hope Haskell doesn't take off. This is extremely
         | difficult to read.
         | 
         | > Maybe I just don't know Haskell well enough, though.
         | 
         | Speaking as someone who knows Haskell fairly well (and likes
         | it), and has used Verilog at least a bit, I agree - it looks
          | like they took all the worst parts of Verilog (especially the
         | procedural parts), translated them into Haskell, and abandoned
         | the occasional redeeming qualities.
        
         | jeff_ciesielski wrote:
         | (From someone who is working on a rv32im implementation in
         | Clash)
         | 
         | I really love writing RTL in haskell when compared to
         | verilog/vhdl, but as a language I think it suffers from the
         | same thing a lot of other languages do: too many ways to do the
         | same thing. Mix that with a language that encourages meta-
         | programming and you've got yourself a recipe for every complex
          | haskell project basically becoming its own little DSL. It's
         | also often made worse because so much haskell is written by
         | type-theorists and mathematicians churning out symbol-soup
         | without a thought for the rest of us plebs.
         | 
         | IMO this is actually pretty readable and the implementation is
         | stitched together nicely. There are some haskell/ml-isms like
         | lenses/monad transformers/partial functions sprinkled in there
         | that complicate a casual read-through, but if you've got a
         | grasp on those most of this is reasonably clear.
         | 
         | It isn't the most complex beast (as others have pointed out, it
         | skips things like the Zicsr and M extensions which add
         | significant complexity) but it could serve very well as say, a
         | companion core to some more complex piece of hardware. Perhaps
         | one that requires reconfigurable logic that would be
         | impractical in silicon but doesn't require realtime interrupts
         | or fast math?
        
         | phkahler wrote:
         | Have you looked at chisel? Or whatever it is the BROOM core is
         | written in.
        
         | GregarianChild wrote:
         | The RISC-V ecosystem is quite fond of Chisel [1]. A new kid on
         | the block is LLHD [2]. There are numerous others.
         | 
         | [1] https://github.com/lowRISC/chisel
         | 
         | [2] https://llhd.io/
        
           | gpanders wrote:
           | I want Chisel to succeed so badly. I'm so sick and tired of
           | writing VHDL. Verilog is no better.
           | 
           | Also it looks like LLHD is perhaps analogous to LLVM in that
           | they offer an intermediate representation instead of
           | providing a language that you code in yourself. Chisel also
           | offers this via FIRRTL [1]. I can't decide whether to be
           | excited about the fact that there are multiple ideas in this
           | space or frustrated that these disparate teams aren't simply
           | collaborating.
           | 
           | [1]: https://github.com/chipsalliance/firrtl
        
             | GregarianChild wrote:
             | There is a lot of resistance to dropping Verilog among
             | working processor designers in my experience.
             | Interestingly, LLHD is being folded into CIRCT [1] which is
             | part of LLVM.
             | 
             | [1] https://github.com/circt
        
             | spamizbad wrote:
              | You should look into Bluespec:
              | https://en.wikipedia.org/wiki/Bluespec
        
         | MrRadar wrote:
          | Robert Baruch on Youtube is building a RISC-V CPU core of
          | comparable complexity to the one in the OP using nMigen, which
          | is an HDL based on Python, and it looks like a very nice tool
          | to work with relative to other HDLs I've seen (though I've
          | never had to actually use an HDL myself before so I may not
          | be the best judge). Here's a playlist of the video series,
          | with the earlier videos basically being an introduction to
          | nMigen:
          | https://www.youtube.com/watch?v=YgXJf8c5PLo&list=PLEeZWGE3Pw...
         | He's got another series that's focused more on the HDL itself
         | building a (relatively much simpler) 6800 CPU core:
         | https://www.youtube.com/watch?v=85ZCTuekjGA&list=PLEeZWGE3Pw...
        
         | UncleOxidant wrote:
         | After playing with some of the alternative HDLs (and even
         | trying to make my own at one point) I've come back to just
         | coding the design in VHDL (which I prefer over
         | Verilog/SystemVerilog, YMMV) and using cocotb
         | (https://github.com/cocotb/cocotb) for creating testbenches. I
         | think in a lot of cases effective use of VHDL generate
         | statements can do a lot of the things these alternative HDLs
         | are trying to do anyway. The alternative HDLs try to shoehorn
         | HDL semantics (everything happens in parallel) into languages
         | that are serial and it ends up being awkward and not any easier
         | to read.
         | 
         | Get comfortable with an HDL. Use something like cocotb to
         | generate testbenches.
        
         | aseipp wrote:
         | It's give and take. I've been writing Haskell for a long time,
         | and I think just reading slowly is a big help in general when
         | dealing with any Haskell code not written by me. This is true
         | for everything but I find it especially true here. There are
         | stylistic choices concerning the code base that some people
         | would quibble over (please nobody start a flame war over 'lens'
         | thank you) that make it a little more opaque to non-Haskell
          | users though, I think. Like, I wouldn't blame anyone for
          | thinking:
          | 
          |     scribe toMem . First . Just . InstrMem =<< dePC <<~ use fetchPC
          | 
          | just doesn't read very well if they don't know Haskell,
         | though I admit it's perfectly readable and understandable to
         | me, and I think could be explained to a novice with a little
         | help.
         | 
         | If you want an alternative to SystemVerilog, another
         | alternative is Bluespec. It's higher level than SV, but much
         | more productive and easier to get right, too. (Incidentally,
         | Bluespec is _also_ a kind of Haskell, but it has two different
         | kinds of syntax, one inspired by Haskell, and one inspired by
         | SystemVerilog...)
        
           | marcosdumay wrote:
            | That line does make me wish for an IDE that automatically
            | displays parentheses around the operators that aren't (.)
            | or ($)...
           | 
           | But I remember that the precedence of lenses compared to
           | binding operators is really important. It's just that I
           | didn't write any lenses for a long time.
        
       | marcodiego wrote:
        | What is the chance of RISC-V becoming an x86 or ARM alternative
        | free from binary blobs and (IME|PSP)-like traps?
        
         | spamizbad wrote:
          | I think RISC-V, without some additional extension(s) and maybe
          | some rework, will have trouble scaling to the levels necessary
          | to be competitive performance-wise with smartphone devices or
          | data centers the way ARMv8 is. Erin Shepherd did a good
          | write-up on why:
         | https://gist.github.com/erincandescent/8a10eeeea1918ee4f9d99...
         | 
         | David Chisnall's critique is also worth a read:
         | https://lobste.rs/s/icegvf/will_risc_v_revolutionize_computi...
         | 
         | With that said for applications that are sensitive to power or
         | transistor counts it wouldn't surprise me if it takes over the
         | low to midrange MCU market 10 years from now.
        
         | Symmetry wrote:
          | In the short to medium term expect RISC-V to make far deeper
         | inroads against embedded cores where roughly all code is custom
         | than against application processors with their need for binary
         | compatibility and countless optimized libraries.
        
         | matthewmacleod wrote:
         | Probably minimal. There is nothing preventing an x86 or ARM CPU
         | without these today, and CPU architecture has ~zero impact on
         | the business decision to rely on binary blobs or proprietary
         | system controllers.
        
           | TrueDuality wrote:
           | There is also nothing preventing someone making a hardware
           | RISC-V core from including additional hardware, custom
           | instructions, or proprietary blobs.
        
         | coldtea wrote:
         | Slim.
         | 
          | Somebody will have to build the high-end designs, spending
          | billions to do so, and those would be the same commercial
          | companies, which (if they adopt RISC-V) will add their own
         | proprietary spin.
         | 
         | https://twitter.com/marcan42/status/1366631459000258565
         | 
         | https://twitter.com/marcan42/status/1366631470110965760
         | 
         | https://twitter.com/marcan42/status/1366631471188828164
        
         | rjsw wrote:
         | The BeagleV [1] looks much like many ARM dev boards.
         | 
         | It will probably still have a binary blob for the GPU though,
         | they are proposing to use an Imagination design.
         | 
         | [1] https://beagleboard.org/beaglev
        
         | Narishma wrote:
         | Maybe in a couple of decades.
        
       | UncleOxidant wrote:
       | Hadn't heard of the VELDT board that this design targets, but it
        | looks like it's based on the Lattice iCE40, which means you can
       | use the open source yosys/symbiflow tools.
        
       | ACAVJW4H wrote:
        | Is anyone aware of a manufactured and distributed fully open-
        | source RISC-V CPU? Maybe one using skywater-pdk
        
         | gchadwick wrote:
         | Take a look at the projects going into the skywater-pdk shuttle
         | https://efabless.com/open_mpw_shuttle_project_mpw_one there are
         | multiple RISC-V CPUs. Manufacturing is currently in progress.
         | 
          | 'Distributed' is the tricky point: what counts as
          | 'distributed' in your eyes? The Skywater MPW is producing ~50
          | devices for each project, I think. I'm sure some of those
          | containing RISC-V CPUs will be distributed around to a few
          | people, but there won't be easy general availability of just
          | being able to buy one.
        
           | baybal2 wrote:
           | Skywater can do much more, but the stupid "harness" eats tons
           | of space, a majority of it.
        
             | gchadwick wrote:
             | > but the stupid "harness" eats tons of space, a majority
             | of it.
             | 
              | I can't blame them for wanting the harness. By having a
              | constant across everything that tapes out, when there are
              | issues with dead or partially working chips it should be
              | easier to track down what's going wrong: the specific
              | project, or something more general with the library,
              | process or tools. There are lots of designs from people
              | new to ASIC flows or without much experience in them, and
              | the harness allows efabless to help support people with
              | bring-up.
             | 
             | Yes for an experienced ASIC designer it may be an annoyance
             | but you are getting entirely free tape-outs of this. If you
             | want to pay efabless or another company for a tape-out you
             | can do your own thing with skywater PDK.
             | 
              | As for the harness taking the majority of the space, are
              | you sure? Take this project with a layout photo:
              | https://efabless.com/projects/34
              | To my eye the 'project' area, which is the larger of the
              | two distinct rectangular regions (the bottom smaller one
              | is the harness), looks to be getting the lion's share of
              | the area.
        
             | [deleted]
        
       | amelius wrote:
       | Does this verify for side-channel attacks?
        
         | GregarianChild wrote:
         | I'm a bit reluctant to reply without having looked at the
         | article in detail, but I feel confident that the answer must be
         | negative!
         | 
         | Why?
         | 
         | Because the very concept of side-channel depends on attacker
         | capability! For example, does your attacker have physical
         | access to the processor or not? With physical access you can
         | exploit channels like power consumption through differential
         | power analysis, or shoot laser pulses at target transistors to
         | flip them and induce the processor to leak secrets. OTOH,
         | without physical access, those channels don't meaningfully
         | exist and you need to rely on, for example, speculation failure
         | attacks and exfiltration via cache timing. So what counts as a
         | side-channel is attacker-capability dependent.
        
           | londons_explore wrote:
           | A good middle ground is to say "no secret data should be
           | leaked via timing of operations".
           | 
            | I.e. a high-security and a low-security thread on the same CPU
           | should not be able to get clues about what _data_ the other
           | has in its address space.
           | 
           | Offering stricter protection than that is pretty hard -
           | simply the fact that one thread is using the floating point
           | units a lot and causing the CPU to throttle is an info leak,
           | so I don't think it's possible to really prevent small leaks
           | of flow control information.
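            | 
            | A software analogue of that middle ground (a hedged sketch
            | in Python; in hardware the corresponding property is that
            | an operation takes the same number of cycles regardless of
            | operand values):

```python
# "No secret data leaked via timing": a comparison whose running time
# does not depend on where the inputs first differ.
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit: the running time reveals the first mismatch index.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Examines every byte regardless of mismatches (stdlib helper
    # intended for exactly this purpose).
    return hmac.compare_digest(a, b)

assert constant_time_equal(b"secret", b"secret")
assert not constant_time_equal(b"secret", b"secreT")
```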
        
             | GregarianChild wrote:
             | This is a good middle ground, but ...
             | 
              | ... leakage by timing side-channels depends in part on
             | accurate your time-measurements are (e.g. Javascript's
             | timer resolution was degraded, in order to make transient
             | failure attacks like Spectre harder [1]).
             | 
             | I totally agree with your second point and believe, but
             | cannot prove, that _no_ current processor with any
             | competitive performance is free from timing side-channels,
             | the best we can currently do is put upper bounds on leakage
             | _rate_. There are just so many other timing side channels,
             | e.g. port contention [2]. They just keep popping up ...
             | 
             | Another dimension is the very meaning of _thread_.
              | Presumably, as an end-user, you care about the
              | threads/processes that the operating system defines. But
              | they don't map one-to-one to hardware threads, cores etc.
              | Indeed
             | I would argue that processors don't have threads in the
             | sense that end-users care about. So the relevant security
             | property must be regarding a hardware/software interface.
             | Quite how to nail down this isolation property is active
             | research I think. See e.g. [3] for work from 2016 in this
             | direction.
             | 
             | Yet another dimension to this is through passwords and
             | similar mechanisms: presumably you want to allow doing
             | things like "sudo" so a low-priority thread can increase
             | priority, provided the former knows the right password. But
              | the very act of supplying a _false_ password leaks a tiny
              | bit of information (that can be quantified in terms of
              | Shannon-style information theory) about the password's
              | search space.
             | 
             | [1] https://hackaday.com/2018/01/06/lowering-javascript-
             | timer-re...
             | 
             | [2] A. Bhattacharyya, A. Sandulescu, M. Neugschwandtner, A.
             | Sorniotti, B. Falsafi, M. Payer, A. Kurmus,
             | _SMoTherSpectre: Exploiting Speculative Execution through
             | Port Contention._ https://arxiv.org/abs/1903.01843
             | 
             | [3] D. Costanzo, Z. Shao, R. Gu, _End-to-end verification
             | of information flow security for C and assembly programs._
             | https://6826.csail.mit.edu/2019/papers/certikos-sec.pdf
        
               | londons_explore wrote:
                | I'd like to see CPUs having a notion of "theoretical
               | time" and "real time". Theoretical time ticks once per
               | CPU instruction or similar. Real time is kept in sync
               | with theoretical time by slowing the CPU down if
               | theoretical time gets ahead, and throwing an exception if
               | real time gets ahead.
               | 
               | All IO with untrusted devices would be delayed until the
               | real time exceeds the theoretical time the message was
               | sent.
               | 
               | Then one can have as many timing sidechannels as one
               | likes, and the running program can never learn about
               | them.
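                | 
                | A sketch of the delayed-IO rule described above (a
                | hypothetical Python model; all names are invented for
                | illustration):

```python
# Messages to untrusted devices are stamped with the theoretical time
# at which they were sent, and only released once real time has caught
# up, so external observers see timing that depends on the instruction
# count alone, not on microarchitectural effects.
import heapq

class TimedIO:
    def __init__(self):
        self.theoretical = 0   # ticks once per retired instruction
        self.pending = []      # min-heap of (release_time, message)

    def retire_instruction(self):
        self.theoretical += 1

    def send(self, message):
        # Queue the message; it becomes visible only at its stamp.
        heapq.heappush(self.pending, (self.theoretical, message))

    def deliver(self, real_time):
        # Release everything whose theoretical stamp has been reached.
        out = []
        while self.pending and self.pending[0][0] <= real_time:
            out.append(heapq.heappop(self.pending)[1])
        return out

io = TimedIO()
io.retire_instruction(); io.send("a")    # stamped at theoretical t=1
io.retire_instruction(); io.send("b")    # stamped at theoretical t=2
assert io.deliver(real_time=1) == ["a"]  # "b" is still held back
assert io.deliver(real_time=5) == ["b"]
```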
        
               | GregarianChild wrote:
                | That would be great. But isn't there a danger that such
                | a CPU, at least if implemented naively, would basically
                | mostly idle, since modern OoO processors schedule memory
               | access dynamically? For hard real-time tasks, some CPUs
               | don't have caches, to make timing more predictable, but
               | there is a heavy performance penalty to be paid for that
               | predictability.
               | 
               | Are you familiar with works like [1]? That is thinking in
               | this direction, but from a different angle.
               | 
               | [1] G. Heiser, G. Klein, T. Murray, Can We Prove Time
               | Protection?. https://arxiv.org/pdf/1901.08338.pdf
        
               | a1369209993 wrote:
               | > ... leakage by timing side-channels depends in parts on
               | how accurate your time-measurements are
               | 
              | The _rate_ of leakage from an existing timing
              | side-channel depends on how accurate your
              | time-measurements are; the _presence_ of such a side
              | channel does not. (Though one shouldn't discount the
              | value of degrading a side channel from kilobytes per
              | second to millibits per hour; even the latter will only
              | protect a reasonably-sized private key for a decade or
              | two.)
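As a sanity check on those rates, here is the back-of-the-envelope arithmetic in Python (the 256-bit key size is an assumption, not from the comment):

```python
HOURS_PER_YEAR = 24 * 365

def years_to_leak(key_bits, bits_per_hour):
    # Time for a side channel at the given rate to exfiltrate the key.
    return key_bits / bits_per_hour / HOURS_PER_YEAR

# At kilobytes per second, a 256-bit key is gone essentially instantly:
fast = years_to_leak(256, 1024 * 8 * 3600)   # 1 KB/s expressed in bits/hour

# Degraded to a millibit per hour, the same key holds out for decades:
slow = years_to_leak(256, 0.001)

print(f"1 KB/s:      {fast:.2e} years")
print(f"1 mbit/hour: {slow:.1f} years")    # roughly 29 years
```

So "a decade or two" is about right for keys in the 100-256-bit range at millibit-per-hour rates.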
        
               | fuklief wrote:
               | > ... leakage by timing side-channels depends in parts on
               | how accurate your time-measurements are (e.g.
               | Javascript's timer resolution was degraded, in order to
               | make transient failure attacks like Spectre harder [1]).
               | 
              | But that doesn't matter if how long your instructions
              | take to execute is data-independent, no?
        
               | GregarianChild wrote:
               | How are you planning to ensure that all victim code has
               | data-independent timing?
        
             | a1369209993 wrote:
              | I agree that that's a good level of security to ask
              | for, but it's a _vastly_ less _vague_ level of security
              | than "verified for side-channel attacks".
        
             | jpaul23 wrote:
             | I think most of practical timing attacks are on software
             | pieces rather than hardware, if two threads interact using
             | some way (pipes, IPC, TCP) you can't blame the CPU for the
             | inter-threads leaks, looks for example how a simple RSA
             | timing attack works, it depends on software implementation.
             | Of course you can use branch-less code to avoid timing leak
             | at some point, but some leaks depend on the algorithm.
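The "branchless code" idea can be illustrated with a byte-string comparison, a classic timing-attack target (a hedged sketch; in real code you would reach for a vetted constant-time primitive rather than roll your own):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early-exit compare: run time depends on the position of the first
    # mismatching byte, which is exactly what a timing attack measures.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def branchless_equal(a: bytes, b: bytes) -> bool:
    # Accumulate all differences with XOR/OR; every call over inputs of
    # the same length does the same amount of work, mismatch or not.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# In practice, the stdlib already provides a constant-time helper:
assert hmac.compare_digest(b"secret", b"secret")
```

Even `branchless_equal` only removes the data-dependent branch; as the comment says, leaks that are inherent to the algorithm itself (e.g. key-dependent modular exponentiation in naive RSA) need algorithmic fixes, not just branchless coding.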
        
               | GregarianChild wrote:
                | Yes, so you want to say something like: the CPU
                | doesn't enable timing attacks _beyond_ those that
                | already exist in the software. Making such a
                | relational property formal in a way that is usable in
                | practice is a hard problem.
        
         | mhh__ wrote:
         | I'm not sure how exactly you'd do it.
         | 
         | I guess you could write some kind of AI that writes gadgets,
         | then tries to find the optimal probability of correctly leaked
         | data.
         | 
          | There are published methodologies for doing this
          | automatically, but it's in the "floats rather than books"
          | category of behaviour since it's quite chaotic, so a formal
          | proof would be hard.
        
         | 0xTJ wrote:
          | I don't see why it would. Verified doesn't mean unhackable
          | (including physically); it means correct.
        
       | random_savv wrote:
       | So if I wanted to take this and start selling processors, how far
       | am I? What comes next?
        
         | seanmclo wrote:
          | Decide what process node you're aiming for, then circuit
          | layout, synthesis, working with a nanofab to actually build
          | the thing, and, assuming there are still bugs in there,
          | multiple rounds of post-silicon debug. It's a long and
          | expensive road ahead if you want this as a real product.
        
       | fuklief wrote:
        | So what kind of formal verification is it? A proof
        | assistant? Model checking? And what does it verify? It's not
        | really clear at first glance.
        
         | sanxiyn wrote:
          | This uses riscv-formal. The default way to use riscv-formal
          | is bounded model checking. It verifies all single
          | instructions and performs some consistency checks, e.g.
          | that a register read matches the last logical write to that
          | register, even in the presence of reordering.
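riscv-formal's actual checks are written as hardware assertions, but purely to illustrate what "bounded model checking plus a consistency check" means, here is a toy Python sketch: it enumerates every program up to a small depth on a made-up two-register ISA and compares a pipelined model (with and without operand forwarding) against a sequential reference model. Everything here is invented for illustration and is not riscv-formal's code.

```python
from itertools import product

# Toy ISA: 2 registers, one instruction form ADDI rd, rs, imm.
REGS, IMMS = (0, 1), (0, 1)
INSTRS = list(product(REGS, REGS, IMMS))  # (rd, rs, imm)

def isa_model(program):
    # Reference: execute one instruction at a time, in program order.
    r = [0, 0]
    for rd, rs, imm in program:
        r[rd] = r[rs] + imm
    return r

def pipelined_model(program, forward=True):
    # Toy 2-stage pipeline: a result lands in the register file one
    # cycle after it is produced. With forwarding, the in-flight result
    # is bypassed to the next reader; without it, the reader sees a
    # stale register value (a deliberately planted bug).
    r = [0, 0]
    inflight = None  # (rd, value) produced last cycle, not yet committed
    for rd, rs, imm in program:
        if inflight and forward and inflight[0] == rs:
            src = inflight[1]      # bypass network
        else:
            src = r[rs]            # register file read (maybe stale!)
        if inflight:
            r[inflight[0]] = inflight[1]  # commit last cycle's result
        inflight = (rd, src + imm)
    if inflight:
        r[inflight[0]] = inflight[1]
    return r

def bounded_check(depth, forward):
    # Bounded model checking in miniature: enumerate every program up
    # to `depth` instructions and compare against the reference model.
    for n in range(1, depth + 1):
        for program in product(INSTRS, repeat=n):
            if pipelined_model(program, forward) != isa_model(program):
                return program  # counterexample
    return None

assert bounded_check(3, forward=True) is None   # correct core: no bug found
print("bug:", bounded_check(3, forward=False))  # broken forwarding is caught
```

Note that, exactly as the parent thread discusses, the single-instruction programs pass even on the broken core; the forwarding bug only surfaces from depth 2 onward, which is why multi-instruction consistency checks matter.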
        
           | the_duke wrote:
           | Wouldn't many bugs only surface in multi-instruction and
           | status register interactions?
           | 
           | But those are also probably way harder to verify.
        
             | sanxiyn wrote:
             | Consistency checks do check multiple instructions. Status
             | registers are work in progress.
        
               | GregarianChild wrote:
               | That's interesting!
               | 
                | What kind of consistency checks (other than register
                | state being preserved by instructions that do not
                | write to the register in question)? Is there some
                | standard best practice?
        
       ___________________________________________________________________
       (page generated 2021-03-04 23:01 UTC)