[HN Gopher] How We Proved the Eth2 Deposit Contract Is Free of R...
       ___________________________________________________________________
        
       How We Proved the Eth2 Deposit Contract Is Free of Runtime Errors
        
       Author : michaelsbradley
       Score  : 170 points
       Date   : 2021-09-11 19:07 UTC (2 days ago)
        
 (HTM) web link (consensys.net)
 (TXT) w3m dump (consensys.net)
        
       | Taikonerd wrote:
       | You know, I'm not really into cryptocurrency, but it does seem
       | like it's contributing to a resurgence of interest in formal
       | methods. So - thank you, cryptocurrency community!
        
         | yjftsjthsd-h wrote:
          | It also seems associated with wider adoption of, and some
          | progress in the state of the art of, functional package
          | managers and reproducible builds, which has been nice to see.
        
         | galangalalgol wrote:
         | I've worked on safety critical software, but we did not use
          | formal methods. Lots of testing and checking for coverage (at
          | the asm level). Oodles of static analysis and wading through false
         | positives. Do formal methods usually supplement those or
         | replace some of it? Can provers or formal methodologies scale
         | to safety critical codebases of 100s of ksloc? Whether such
         | codebases should exist, or be better partitioned is a separate
         | issue.
        
           | someguyorother wrote:
           | The current crowning achievement of formal methods is, as I
           | understand it, seL4. It is a formally proven microkernel of
           | about 8500 LOC. There's still a while to go until they can
           | scale to 100kLOCs, unfortunately.
        
             | nextaccountic wrote:
             | CompCert is also very impressive. It's not, however, free
             | software / open source (the source is available though)
             | 
             | https://www.absint.com/compcert/structure.htm
             | 
             | A problem with both seL4 and CompCert is that the code
             | written to express the proofs is huge, much larger than
             | code that actually does stuff. This puts a ceiling on the
             | size of the projects we can verify.
             | 
              | F* is a language that tries to address that by finding
              | proofs with Z3, an SMT solver; Z3 can't prove everything on
              | its own, but it cuts down proof code by orders of magnitude.
              | They have written a verified cryptography stack and a TLS
              | stack, and want to write a whole verified HTTP stack.
             | 
             | https://www.fstar-lang.org/
             | 
             | https://github.com/project-everest/hacl-star
             | 
             | https://www.mitls.org/
             | 
             | https://project-everest.github.io/
             | 
              | F* (through Low*, a verified low-level subset of F*) can
              | extract verified code to C, which is kind of the inverse of
              | the seL4 proof: seL4 begins with C code and enriches it
              | with proofs of correctness; hacl* (a verified crypto F*
              | lib) begins with proven-correct F* code and extracts C code
              | (I gather the actual crypto primitives are compiled
              | directly to asm because C has some problems with
              | constant-time stuff). This enables hacl* to offer bindings
              | to other languages that can just call C code, like this
              | Rust binding
             | 
             | https://github.com/franziskuskiefer/evercrypt-rust
             | 
             | Also this F* stuff is all free software / open source, so
             | it might become a very prevalent crypto and TLS stack
        
           | retrac wrote:
           | Not as a monolithic object. You might be able to do it with
           | an almost microkernel approach (whatever you're implementing)
           | with multiple small, well-defined objects with a well-defined
           | protocol between them. That can be very useful. But I doubt
           | you could formally verify their collective behaviour. The
           | space and time complexity of the algorithms is explosive as
           | the number of interacting objects increases.
        
             | maweki wrote:
             | Note that the specification (what you prove correctness of
             | the implementation against) already grows with the number
             | of interactions.
        
           | tombert wrote:
           | I've never really worked on anything safety-critical, but I
            | know that at least one real-time operating system has been
            | designed and verified with TLA+ [1]. The book is a bit pricey
            | (and used copies don't pop up that often), so I haven't read it,
           | but my understanding is that the developers felt that TLA+
           | did find bugs.
           | 
           | That said, TLA+ is written separately from the code. There
           | could of course be bugs in the translation of TLA+ ->
           | C...TLA+ is about checking the design itself, not the code.
           | 
           | [1] https://www.springer.com/gp/book/9781441997357
        
           | noneeeed wrote:
           | Yes, they are normally used in conjunction with testing. But
           | they are also used in conjunction with careful systems
           | engineering to minimise the amount of code that needs to go
            | through the most rigorous processes and focus on what really
           | mattered.
           | 
           | I'm curious which static analysis tool you were using. I used
           | to work in SPARK Ada and false positives were not really a
           | thing due to how it worked. But I've heard of others that
           | were much less restrictive which ended up with lots of false
           | positives, or "maybe" cases that then wasted everyone's time.
           | I can see how systems like that would put people right off
           | using them.
        
           | notimrelakatos wrote:
            | Formal methods seem to be perfect for verifying protocols,
            | especially for finding design flaws. Here's an example of an
            | ongoing effort to verify TLS:
           | 
           | https://www.mitls.org/
           | 
           | https://github.com/project-everest/mitls-fstar
        
         | X6S1x6Okd1st wrote:
         | Another public good that cryptocurrency has been contributing
          | to is advances in zero-knowledge proofs, including some really
          | interesting stuff like SNARKs.
        
           | miketery wrote:
           | Any good tutorials? I'm having a hard time understanding some
           | of the write ups I've found so far.
        
           | tromp wrote:
           | Here's the state of the art on that:
           | 
           | https://iacr.org/cryptodb/data/paper.php?pubkey=31249
        
             | X6S1x6Okd1st wrote:
              | One thing to highlight about how important this may turn
              | out to be:
             | 
             | > Proof-carrying data (PCD) enables a set of parties to
             | carry out an indefinitely long distributed computation
             | where every step along the way is accompanied by a proof of
             | correctness. It generalizes _incrementally verifiable
             | computation_
             | 
             | This has the potential to remove the element of trust that
             | you currently _must_ place in the provider of computation
              | resources. With SNARKs you can simply ask them to include a
              | proof that they actually performed the computation, where
              | the proof is sublinear in the size of the computation and
              | sublinear in the time needed to verify.
             | 
             | Of course this will always be more computationally
             | expensive for the provider of the computation resources to
             | both do the work and provide the proof.
        
         | westurner wrote:
         | From "Discover and Prevent Linux Kernel Zero-Day Exploit Using
         | Formal Verification"
         | https://news.ycombinator.com/item?id=27442273 :
         | 
         | > _[Coq, VST, CompCert]_
         | 
         | > _Formal methods:https://en.wikipedia.org/wiki/Formal_methods
         | _
         | 
         | > _Formal specification:https://en.wikipedia.org/wiki/Formal_sp
         | ecification _
         | 
         | > _Implementation of formal
         | specification:https://en.wikipedia.org/wiki/Anti-
         | pattern#Software_engineer... _
         | 
         | > _Formal
         | verification:https://en.wikipedia.org/wiki/Formal_verification
         | _
         | 
         | > _From "Why Don't People Use Formal Methods?"
         | https://news.ycombinator.com/item?id=18965964 :_
         | 
         | >> _Which universities teach formal methods?_
         | 
         | >> _- q=formal+verificationhttps://www.class-
         | central.com/search?q=formal+verification _
         | 
         | >> _- q=formal+methodshttps://www.class-
         | central.com/search?q=formal+methods _
         | 
         | >> _Is formal verification a required course or curriculum
         | competency for any Computer Science or Software Engineering /
         | Computer Engineering degree programs?_
        
         | tombert wrote:
         | I agree; I've been trying to get employers/coworkers to look
         | into TLA+ or Isabelle (for different things) for years now, and
         | it feels like only after every cryptocurrency started blogging
         | did they start to listen.
         | 
         | I find it funny that the reason we want the cryptocurrencies to
         | be formally verified is because we want rock-solid financial
         | transactions so that this distributed story is always accurate
          | (which is good), but actual _banks_ don't seem to have these
         | same requirements.
         | 
         | NOTE: Could be wrong on that last point, getting my information
         | about banks third-hand, it's possible that banks are using
         | formal verification. Can someone who works at a bank confirm?
        
           | lottin wrote:
           | This is because banks rely on property rights, which are
           | enforced by courts, whereas cryptocurrencies operate on a
           | "fait accompli" basis (i.e. whoever happens to control the
           | coins, owns them). This means, in the cryptocurrency world,
           | mistakes are irreversible. In conventional finance, this is
           | not usually the case.
        
             | robocat wrote:
             | Fiat accompli?
        
               | lottin wrote:
               | "Fait accompli"
               | 
               | https://www.merriam-
               | webster.com/dictionary/fait%20accompli
               | 
               | There's probably a better term but English isn't my first
               | language.
        
               | robocat wrote:
               | Your English is excellent - as a native speaker I
               | wouldn't have guessed you had a different mother tongue.
               | 'Fait accompli' is used in English with the same meaning.
               | I was trying to make a stupid joke referring to fiat
               | currency... HN is a tough audience, and I'm more of a
               | droll engineer type.
        
           | WanderPanda wrote:
            | Banks have the global exception handling system called courts
            | to fall back to, though.
        
             | tombert wrote:
             | Yeah, which always felt like that would be a great reason
             | to formally verify...If a race condition ends up draining
             | my account of all my money, you can bet that I am going to
             | sue for the lost funds and probably a bunch of other
             | things, which I cannot imagine would be very cheap for the
             | bank. In fact, I would think it would be expensive for them
              | even if it _wasn't_ their fault (e.g. maybe there was no
             | money in my account because I just have bad spending habits
             | and a bad memory).
             | 
             | It seems to me that having every bit of evidence on their
             | end to prove that they are guaranteeing our transactions
             | would be a really good thing....I suppose this solution
             | might just work itself out if banks actually start relying
              | on some kind of blockchain to secure their transactions.
        
               | vineyardmike wrote:
               | > which I cannot imagine would be very cheap for the bank
               | 
                | It's cheaper for everyone for them to say oops and give
               | you the money back.
               | 
                | It's more expensive for you to sue than for them to be
               | sued.
        
         | serverholic wrote:
         | Despite all the drama around crypto, there is a lot of cool
         | stuff going on in that space.
        
       | klik99 wrote:
       | The formal proof stuff is very cool, but I'm getting some
       | unsinkable titanic vibes from how they present this.
        
         | dv_dt wrote:
         | The formal method involves using a programming language (a
         | mathematically based, verification oriented language) to verify
         | another module in some other programming language.
         | 
         | I'm sure it strengthens the codebase being verified. But there
          | is a reason systems engineering involves both verification
         | and validation.
        
         | DennisP wrote:
         | Happily, the community didn't throw caution to the wind just
         | because somebody wrote a proof.
        
         | hibikir wrote:
         | Formal methods are good and all, but you can still run into
         | trouble when you fail to account for everything.
         | 
         | Some years ago, I had to work on a massively distributed system
         | that scaled up or down with minimal communication between
         | nodes, depending on the amount of work the system had. The
         | developer of that scaling algorithm, a mathematician that gives
         | quite a few software talks, and writes books teaching
         | functional programming, wrote a proof for the system,
         | guaranteeing its upper and lower bounds of performance. I was
         | brought in because the system wasn't acting anywhere near those
         | parameters: The worst part was that often it'd not downscale at
         | all. The mathematician pointed at his proof, and therefore at
         | the fact that if there was a problem, it must be a bug
         | somewhere else.
         | 
         | So after learning how to read his proof, executing it, and
         | doing all the typical ops stuff to identify what the system
         | believed was going on at any particular time, it became clear
         | that the proof had a significant flaw: It believed that the
         | amount of work that the system received was perfect
         | information, with no latency. Since the real world has latency,
         | and none of that was accounted for, the proof was guaranteeing
         | success in a universe that is different from ours. Once I
         | modified the model to represent the real information quality,
         | the outcomes were pretty close to what we were seeing in
         | reality, and equally unacceptable.
         | 
         | So proofs are powerful, and maybe there are zero issues in the
         | proof of this smart contract, but formal models and proofs can
         | still fail you, they'll just fail far less often than your
         | typical system made with unit tests and chewing gum.
        
           | albertgoeswoof wrote:
           | Chewing gum systems tend not to fail catastrophically though,
           | due to a general lack of trust in the solution. Formal
           | verification often instills a sense of confidence that means
           | few, if any, failsafes are put in place.
        
           | kerblang wrote:
           | Interesting story, esp. as an example of the difference
           | between math and science.
        
         | mabbo wrote:
         | Admittedly I'm being a bit trollish as I say this, but do you
          | get the same feeling from proofs that the Pythagorean Theorem
          | is correct?
         | 
         | Proving things correct is much harder than finding counter-
         | examples that they aren't correct. But the methods _do_ work.
          | They're based on sound formal logic. Proofs can have mistakes,
          | certainly, but it's a darn strong signal that their system is
         | correct.
        
           | tikhonj wrote:
           | The problem with proving things about software is less in the
           | proof itself and more in the connection between the proof and
           | the real world.
           | 
           | If somebody told me that they had a financial system whose
           | security was based on applying the Pythagorean Theorem to
           | physical triangles, it would raise exactly the same red flags
           | --the theorem itself isn't a question, but it doesn't even
           | try to capture how physical materials might be imperfect or
           | change over time, and those are exactly the sort of
           | inconsistencies a motivated attacker could exploit.
        
             | klik99 wrote:
             | I was going to reply but exactly this - I'll add that the
             | proof work in the article is really interesting and I love
             | to see practical work in formal proofs happening as a
              | result of eth in particular. However, I just don't believe
              | that 100% proven code on eth is 100% trustworthy.
             | 
             | Another way of phrasing it is that the law of leaky
             | abstractions means that though the code itself might be
             | 100% correct, lower levels can puncture the assumptions and
             | make that correctness moot - IE, unsinkable titanic vibes
        
           | [deleted]
        
       | jeffrallen wrote:
        | This is great!
       | 
       | And also, as a long time observer of software correctness proof
       | fails, I'm getting the old popcorn popper ready for the first
       | instance where there's a bug and then we'll have explained to us
       | that, well actually, the prover wasn't covering that case...
        
         | whoknowswhat11 wrote:
         | EXACTLY this. It'll be like - oh we proved it using X, but
          | wrote code using Y and oops, there is some difference so this
         | doesn't count as a break in the proof. If that's the case, you
         | have not in fact proved the contract, you've proved a different
         | contract or code base.
        
         | nlitened wrote:
         | No biggie, they will just make a new fork of modified ETH
         | blockchain where this error is fixed, and all the users will
         | follow.
        
           | 0des wrote:
            | A lot of folks might think this is you pooh-poohing ETH
           | because you invested differently, but this is quite literally
           | the established track record. For the uninitiated, this is
           | how we got Ethereum and Ethereum Classic. I'd be interested
           | in some of the downvoters making a sound rebuttal rather than
           | graying you into obsolescence but it's 2021. The filter
           | bubble expands thusly.
        
       | agomez314 wrote:
        | Nothing motivates people to do the hard work more than money!
        
       | juskrey wrote:
        | But not free from logical errors, especially forward
        | compatibility errors?
        
       | goolulusaurs wrote:
       | Without knowing much about the topic, when I hear about code
       | being proven to be correct it makes me think of the Curry-Howard
       | correspondence, which states that proofs and programs are
       | isomorphic to each other. Is this related at all? If programs and
       | proofs are the same thing and you have a proof that a program is
       | correct, is that like having a proof that a proof is correct? In
       | which case it seems like you are getting into the domain of meta-
       | logic.
        
         | fooker wrote:
         | It is related, but not in the way you are imagining.
        
         | skybrian wrote:
         | No, not really. Under Curry-Howard, if you have a total
         | function that returns an integer, you've proven that an integer
         | exists. But we knew that already.
         | 
         | To prove non-trivial things, you need more sophisticated types
         | that make interesting assertions about their values, where it's
         | not immediately obvious how to construct a value of that type.
          | not immediately obvious how to construct a value of that type.
          | Special proof languages are used for this.
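          | 
          | For illustration, here is a minimal, made-up sketch in Dafny
          | (the language used in the article); Dafny uses contracts
          | rather than dependent types, but the flavor is similar:
          | 
          |     // The return type "int" alone only promises that some
          |     // integer comes back; that is a trivial claim.
          |     function Anything(): int { 42 }
          | 
          |     // The ensures clauses are the interesting assertion, and
          |     // the verifier must prove the body actually meets them.
          |     function Max(a: int, b: int): (m: int)
          |       ensures m >= a && m >= b
          |       ensures m == a || m == b
          |     {
          |       if a >= b then a else b
          |     }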
        
       | joelbondurant wrote:
       | Just store your pngs in MongoDB.
        
       | wilde wrote:
       | Can Etherium run contracts written in Dafny? This is cool, but it
       | doesn't guarantee that the Solidity matches the proof at all,
       | does it?
        
         | schelling42 wrote:
         | It is spelled Ethereum.
        
       | unixhero wrote:
       | This is what certik tries to solve. I just don't understand why
       | smart contracts need certik to be verified.
        
         | lima wrote:
         | > I just don't understand why smart contracts need certik to be
         | verified.
         | 
         | How would CertiK sell their token otherwise? :p
        
       | vmception wrote:
        | Eh, staking Eth2 fits my risk profile and I don't care if another
       | TheDao incident occurs resulting in loss of my Eth
       | 
       | A state rollback would be extremely disappointing to me, even
       | though that much Eth in the hands of one party would undermine
       | the security of Eth2 (which is why the other state rollback
       | occurred)
       | 
        | This proof is impressive, but doesn't change anything for me
       | whether it is bulletproof or not
        
         | rfd4sgmk8u wrote:
         | You are ok with losing $100k of collateral? I just don't have
         | the fortitude to put that much on the line. I could... but I
         | don't think it is wise.
         | 
         | I still don't believe this migration is going to happen at all.
         | If you look at the PoS blocks from other EVM chains, you see
         | that reorgs happen way too often to be comfortable with $100B
         | of assets.
        
       | 19h wrote:
       | How does one prove the correctness of the correctness proofs?
        
         | AmericanChopper wrote:
         | You can't. The headline is missing a piece of context in that
          | they proved it was free of runtime errors... according to a
         | specification they chose to evaluate it against. What if their
         | specification contained an error? You could verify that
         | specification against another specification, you could even do
         | that formally if you want to. Eventually you have to confront
          | Gödel's incompleteness theorem, and accept that such a level of
         | proof is not possible.
         | 
         | Formal verification btw is just a formalised methodology for
         | evaluating something against a specification. It doesn't prove
         | that errors don't exist, or that the verifier hasn't managed to
         | independently make their own error. It's just a rigorous PR.
        
         | junon wrote:
         | Generally using a language like OCaml, which is the language of
          | choice for most large formal proof projects.
        
         | pharmakom wrote:
         | The idea is that we have more confidence in the model and model
         | checker than the actual code. At the very least, this is a form
          | of double-entry bookkeeping.
        
         | jkhdigital wrote:
         | Gotta formally verify the source code of the proof checker...
         | then the entire bootstrapped tool chain that was needed to
         | compile it. No big deal
        
       | throwaway98797 wrote:
       | It's all good until it isn't.
        
       | k__ wrote:
       | Weren't there some efforts in a new, safer language for the EVM?
        | I think Vyper or something.
       | 
       | What happened?
        
         | casi18 wrote:
          | Vyper has safe math built in, so you won't get accidental
          | over/underflows. But Solidity also has that now in the newer
          | versions, so it's really a matter of whether you prefer the
          | Python-like syntax. Fe is also being developed; it looks like
          | it has been planned out a bit more than Vyper.
         | 
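          | As a rough illustration of what checked arithmetic buys you,
          | here is a made-up sketch in Dafny (not Vyper or Solidity): the
          | verifier rejects any arithmetic it can't prove stays in range.
          | 
          |     // A small bounded word type, in the spirit of the EVM's
          |     // fixed-width integers (8 bits here to keep it short).
          |     newtype uint8 = x: int | 0 <= x < 256
          | 
          |     method Add(a: uint8, b: uint8) returns (c: uint8)
          |       // The caller must rule out overflow, or the proof fails.
          |       requires a as int + b as int < 256
          |     {
          |       c := a + b;
          |     }
          | 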
         | But more likely we will see developments of security and proofs
         | in languages like cairo, to be used on zkrollups, where we will
         | also see alternative VMs to the EVM. So there will be lots of
         | options with different tradeoffs between
         | composability/security/efficiency/redundancy.
        
         | mypastself wrote:
          | I think Solidity won out due to the ease of developer
         | adoption (unlike Vyper, it is object-oriented).
         | 
         | ETH2's EVM replacement, EWASM, will use WebAssembly, so
         | developers should be able to use saner programming languages
         | than Solidity.
        
           | k__ wrote:
           | Ah, I didn't know about EWASM.
           | 
           | This sounds pretty rad!
           | 
           | Thanks!
        
             | duncancmt wrote:
             | Unfortunately EWASM is pretty much dead. The current
             | direction of ETH2 R&D is still EVM-based.
        
               | mypastself wrote:
               | Huh, I guess my information is out-of-date. It is a fast-
               | moving space. And it's a shame they're continuing with
               | Solidity.
               | 
               | I'm wondering how Cardano's Haskell-based Plutus platform
               | will compare in practice, now that they're rolling out
               | smart contracts as well. I'm guessing they're going to
               | have significant adoption issues.
        
               | jwlake wrote:
                | Solidity isn't actually that bad a language in practice;
               | it's a very simple language.
               | 
               | The EVM on the other hand is harder to justify. :)
        
               | jwlake wrote:
               | NEAR is WASM based. It's interesting but not nearly as
               | "battle tested" as EVM. ETH2 also can't break any
               | backward compatibility, which makes it easier to just
               | keep going down the EVM route.
        
               | udbhavs wrote:
               | EOS as well
        
               | k__ wrote:
               | That sounds pretty sad.
               | 
               | Why did they stop going in that direction?
        
               | vvpan wrote:
                | I think it's just a matter of priorities, and also the
                | EVM has accumulated so much tooling that eWASM would be a
                | setback.
        
               | duncancmt wrote:
                | I'm not an Ethereum core dev, but this is my
               | understanding:
               | 
               | The EVM is tailor-made to accurately account for the
               | expense of executing programs in a distributed
               | environment. It has a highly heterogeneous addressing
               | scheme. Every opcode has execution cost metering that has
               | been refined over time. Each word is 256 bits to make
               | 256-bit cryptographic operations easier. There's
               | significant tooling around the EVM for writing and
               | analyzing smart contracts. Other EVM-based blockchains
               | would need to migrate in parallel. The skillset of
               | "blockchain application developer" has coalesced around
               | Solidity and the EVM.
               | 
               | Bottom line is that although WASM offers compatibility
               | with non-blockchain tooling, the blockchain-specific
               | needs of Ethereum are so much better served by the EVM
               | that migrating is difficult.
        
               | jwlake wrote:
               | The EVM design choices are tightly bound with the on-
               | chain storage of data. On ethereum you pay for reads
               | (less) and writes (more) but everything is very expensive
               | because it's stored in a merkle tree with computationally
               | expensive operations.
               | 
                | A lot of the WASM experimentation has also included
               | changes around how storage is charged. Specifically NEAR
               | does a deposit system where you have to HODL to store
               | data on chain. That allows them to innovate in the
               | runtime and cost structure and still incentivize
               | blockchain nodes.
               | 
                | So it's not entirely gas accounting for computation, it's
               | more pricing for storage that probably keeps eth on EVM
               | indefinitely.
               | 
                | Other experimental chains are much more likely to become
               | robust and trusted and more performant and then eclipse
               | ethereum.
        
               | duncancmt wrote:
               | Ethereum considered implementing storage rents, like
               | NEAR, but instead decided to transition to a verkle trie
               | model that permits succinct proofs of state values. State
               | rent presents some serious problems regarding
                | composability. What's the state-of-the-art solution for
               | state rent on resources that are effectively common
               | goods?
        
               | trianglesphere wrote:
               | I think there were also some issues around metering gas,
                | which tracks how much work is done by uploaded code
        
               | AgentME wrote:
               | They're currently prioritizing moving to proof-of-stake
               | (which will end the need for ethereum mining and its high
               | electricity usage globally) and sharding (which will
               | solve the congestion that's causing very high transaction
               | fees). Using WASM would be nice in some ways but it
               | doesn't solve any high priority problems. Maybe it will
               | happen later once some attention is freed up by these
               | main problems being solved.
        
               | k__ wrote:
               | Sharding?
               | 
               | In what sense?
        
       | jkhdigital wrote:
       | Call me old fashioned, but I'm of the opinion that _every_ "smart
       | contract" should come with a machine-checkable proof of
       | correctness.
        
         | contravariant wrote:
         | Simple, just declare any and all consequences of the contract
         | to be binding and it is now correct by definition.
        
           | MontagFTB wrote:
           | In those cases there are no remedies for bugs. What recourse
           | does a victim have in the case of an exploit? How is such a
           | system supposed to attract users when there are no guarantees
           | that problems will ever resolve in their favor?
        
           | tgv wrote:
            | That's entirely possible, and a desirable property of such
           | contracts. It does restrict the language in which contracts
           | must be written.
        
             | contravariant wrote:
              | Well, it's not technically a restriction, though it would
              | certainly be a lot easier if you wrote contracts in a
              | language that left no room for doubt.
             | 
             | In fact you can't really get away from the need for a clear
             | formal description of what the contract does anyway. If you
             | write it in a more flexible language but only accept it
             | with a proof of some formal properties then those formal
             | properties are in effect the contract.
             | 
             | At which point we reach something I still don't quite
             | understand. If people accept code satisfying those formal
             | properties as 'proven correct', then why aren't the
             | contracts written as those formal properties in the first
             | place?
        
               | tgv wrote:
               | It's a restriction, even if it turns out to be a 95%
               | subset of C, because without restrictions, the halting
               | problem would prevent effective verification.
               | 
               | > In fact you can't really get away from the need for a
               | clear formal description of what the contract does
               | anyway.
               | 
               | The code is its own description. The point is
               | verification: does the code implement the contract?
        
               | ImprobableTruth wrote:
               | >then why aren't the contracts written as those formal
               | properties in the first place?
               | 
               | How are you going to enforce the contract? If it's
               | formally verified, the contract is enforced by itself.
               | 
               | Not to mention, I think as a developer accepting a spec
               | that isn't verified is a tough pill to swallow, because
               | any bug no matter the size would mean that you're
               | breaking the contract.
        
           | swamp40 wrote:
           | Like the Pope.
        
         | PeterisP wrote:
         | The problem is with how you define and describe "correctness" -
         | a machine can check that code X matches definition Y, but there
         | can be bugs in the nuances of that formal definition Y, so you
         | effectively just have a different set of code in a different
         | language (that formal definition) that you need to audit.
        
           | logicchains wrote:
           | >a machine can check that code X matches definition Y
           | 
           | You shouldn't just check against a definition Y. Ideally
           | you'd have a set of properties that the contract must hold,
           | then prove that X satisfies those properties. It's generally
           | much easier to specify such properties than to write code
            | implementing them; compare for instance an "is_sorted"
           | function to an efficient "sort" function, so it's not just
           | the same code written in a different language.
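            | 
            | A minimal Dafny sketch of that gap (names made up): the
            | property is a one-liner, while a verified implementation of
            | the sort itself would be far more work.
            | 
            |     // Stating the property is easy.
            |     predicate IsSorted(s: seq<int>)
            |     {
            |       forall i, j :: 0 <= i < j < |s| ==> s[i] <= s[j]
            |     }
            | 
            |     // Spec of a sort; an efficient body meeting it
            |     // (omitted here) is the hard, verification-heavy part.
            |     method Sort(a: seq<int>) returns (b: seq<int>)
            |       ensures IsSorted(b)
            |       ensures multiset(b) == multiset(a)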
        
             | cortesoft wrote:
             | > Ideally you'd have a set of properties that the contract
             | must hold, then prove that X satisfies those properties.
             | 
              | The issue is: how do you know you have the correct set
             | of properties that a contract must hold? You might think
             | you have all the properties that you need, but realize
             | later that you missed one that is critical for your desired
             | outcome.
        
             | dannyw wrote:
             | A "is_facebook" function may not be much easier. Smart
             | contracts like Compound, Uniswap are complex functions;
             | much more complicated than sorting which has an objectively
             | correct answer.
        
               | goldenkey wrote:
               | We can do a probabilistic check. Just see how much data
               | is being exfiltrated, if it's greater than a X% we can
               | figure it's either Facebook or the Chinese government.
               | ;-)
        
           | UncleMeat wrote:
           | Of course there can be flaws. WPA2 was famously proven
           | correct, but that didn't stop KRACK.
           | 
           | This still dramatically raises the bar. EVM has a variety of
           | very fun footguns and formal verification can help you dodge
           | lots of these.
        
         | casi18 wrote:
         | I think we will see more work like this, and tools developed to
          | help. But really it comes down to tradeoffs of speed and
          | security, what can't be insured, and the impact of bugs.
         | 
          | You might have an uninsured contract with multiple audits and
          | proofs of correctness, vs an insured contract with one quick
          | review. Which one would you put money into?
        
           | nicoburns wrote:
           | Neither? I'd much rather use a traditional contract that can
            | fall back on the legal system if need be.
        
             | bbarnett wrote:
             | A handshake can be a binding contract. Words spoken can
             | bind you.
             | 
             | Thus, eth contracts can be enforced by civil courts too, if
             | there is some chicanery going on...
        
               | JackFr wrote:
               | If the meeting of the minds [1] is that the code is the
               | contract, then the civil courts aren't going to help.
               | 
               | For keeps, no takebacks, no do-overs...
               | 
               | [1] https://en.m.wikipedia.org/wiki/Meeting_of_the_minds
        
               | tomalpha wrote:
               | IANAL, but I understood Meeting Of Minds to refer to the
               | need for both parties to a contract to actually agree
               | that a contract exists.
               | 
               | From the wiki page you reference for example:
               | 
               | > The reasoning is that a party should not be held to a
               | contract that they were not even aware existed.
               | 
               | And later on, something directly relevant to this
               | discussion perhaps:
               | 
               | > Mutual assent is vitiated by actions such as fraud,
               | undue influence, duress, mutual mistake, or
               | misrepresentation.
               | 
               | I think that this discussion is more closely related to
               | the legal concept of a Mistake [0], which absolutely
               | _can_ be something that a court might address. Though
                | even that doesn't seem like quite a perfect fit here.
               | 
               | [0] https://en.m.wikipedia.org/wiki/Mistake_(contract_law
               | )#Mutua...
        
               | dsr_ wrote:
               | ...as long as all the participants are traceable entities
               | with known legal jurisdictions.
        
               | jacquesm wrote:
               | Yes, but that would invalidate a very large chunk of the
               | possible usecases for eth contracts. After all, if you
               | end up needing the courts anyway you might as well use a
               | normal contract, and the whole advantage of eth contracts
               | - to me at least - appears to be the feature that the eth
               | contract is the entirety of the arrangement. So it can be
               | used between two parties in entirely different
               | jurisdictions without that creating an imbalance.
        
               | [deleted]
        
               | jameshart wrote:
               | And when the court orders that the parties do something
               | that the eth contract cannot permit (and has been
               | formally proven to prevent from ever being induced to
               | do), how does the eth contract handle that?
        
         | antupis wrote:
          | I think it is moving there, but the ecosystem is not there yet.
          | Also, it does not help that Ethereum has chosen Solidity, which
          | is a bastard child of JavaScript.
        
           | m00dy wrote:
            | Would you elaborate on Solidity vs JS?
        
             | antupis wrote:
              | https://docs.soliditylang.org/en/v0.8.7/language-influences...
              | There is still a strong resemblance between JS and
              | Solidity.
        
               | m00dy wrote:
                | I think it is only the "function" keyword. I'm not sure
                | how strong the resemblance is.
        
           | tiborsaas wrote:
           | It's more similar to Java or TypeScript than JavaScript.
        
           | danielvf wrote:
           | Solidity has very very little in common with javascript other
           | than they were both quickly put together languages full of
           | weird spots, and both fairly successful.
           | 
           | Solidity is much closer to being C's mentally challenged
            | grandchild than it is to javascript.
        
         | elif wrote:
         | First, start with an automatic proof that the contract will
         | halt...
         | 
         | Oh wait
        
           | WJW wrote:
           | You jest, but the halting problem is only unsolvable in the
           | general case. For "most" (but not all) programs it is
           | perfectly possible to prove whether or not they will halt.
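            | 
            | For example, verification-oriented languages like Dafny ask
            | for a termination measure on every loop and recursive call,
            | so halting is proved for whatever programs they accept. A
            | tiny made-up sketch:
            | 
            |     // Verified to halt because n strictly decreases and is
            |     // bounded below by 0.
            |     function Sum(n: nat): nat
            |       decreases n
            |     {
            |       if n == 0 then 0 else n + Sum(n - 1)
            |     }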
        
           | TacticalCoder wrote:
           | The halting problem in Ethereum is "solved" (more like
           | "dealth with") by a very simple technique: every EVM
           | instruction has a cost and you have to provide "gas" to run
           | your smart contract. If you don't provide enough gas, your
           | contract call has zero effect and all the gas you paid is
           | lost.
           | 
           | A contract call that would misbehave is guaranteed to halt:
           | not by itself, but by the EVM, which, when the contract call
           | eventually runs out of gas, shall take care of halting it.
           | 
           | It's pretty nice I'd say.
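            | 
            | A toy, made-up Dafny sketch of the idea: even a program that
            | loops back on itself is forced to halt, because every step
            | burns gas and the gas can only go down.
            | 
            |     method Run(code: seq<int>, gas0: nat)
            |       returns (steps: nat)
            |       ensures steps <= gas0
            |     {
            |       var gas := gas0;
            |       var pc := 0;
            |       steps := 0;
            |       while gas > 0 && 0 <= pc < |code|
            |         invariant steps + gas == gas0
            |         decreases gas
            |       {
            |         // "execute" code[pc]; arbitrary jumps are fine,
            |         // termination comes from gas, not the program
            |         pc := pc + 1;
            |         gas := gas - 1;
            |         steps := steps + 1;
            |       }
            |     }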
        
             | mst wrote:
              | IIRC Erlang does reduction-based scheduling, except with a
             | multiplier related to the number of messages in its mailbox
             | - would be interesting to read a comparison from somebody
             | who understands the runtimes better than I do.
        
         | anm89 wrote:
          | Why be absolutist? Is that necessary if the contract has $20
          | riding on it?
        
       | pidge wrote:
       | To clarify, they implemented the algorithm in Dafny, and then
       | proved that version correct. They did not verify code that will
       | actually run in production.
       | 
       | From the paper:
       | 
       | > Dafny is a practical option for the verification of mission-
       | critical smart contracts, and a possible avenue for adoption
       | could be to extend the Dafny code generator engine to support
       | Solidity ... or to automatically translate Solidity into Dafny.
       | We are currently evaluating these options
        
         | ilammy wrote:
         | "Beware of bugs in the above code; I have only proved it
         | correct, not tried it." -- D.E.K.
        
         | X6S1x6Okd1st wrote:
          | Additionally, what's being proven is a translation of the
          | algorithm implemented in Solidity. Solidity is not what runs on
         | Ethereum, EVM bytecode is. Solidity is compiled down to EVM
         | bytecode and that's what is run.
         | 
         | That seems like another point where a bug could creep in.
         | 
         | I wouldn't be surprised if there was a hard fork to save the
         | deposit contract if there was a critical bug discovered.
        
           | yissp wrote:
            | I mean, if there was a critical bug in the Solidity compiler
            | itself, that would put the correctness of pretty much every
           | contract into question, right? It seems like a fork would be
           | hard to argue against in that case.
        
           | latchkey wrote:
           | There will be many hard forks before the funds from the
           | deposit contract can be retrieved.
        
             | X6S1x6Okd1st wrote:
             | Fair enough, non-contentious hard forks in Ethereum are
             | usually called upgrades at this point.
        
         | westurner wrote:
         | https://github.com/dafny-lang/dafny
         | 
         | Dafny Cheat Sheet:
         | https://docs.google.com/document/d/1kz5_yqzhrEyXII96eCF1YoHZ...
         | 
         | Looks like there's a Haskell-to-Dafny converter.
        
       ___________________________________________________________________
       (page generated 2021-09-13 23:01 UTC)