[HN Gopher] Jepsen: Capela dda5892
___________________________________________________________________
Jepsen: Capela dda5892
Author : aphyr
Score : 66 points
Date : 2025-08-07 14:44 UTC (8 hours ago)
(HTM) web link (jepsen.io)
(TXT) w3m dump (jepsen.io)
| pluto_modadic wrote:
| does Jepsen's test software auto-generate the cool diagrams (like
| figure 3.22), or do you have to make them yourself? is there any
| software you prefer for that?
| aphyr wrote:
| It does indeed! This is part of https://github.com/jepsen-io/elle,
| which infers strongly connected components of the transaction
| dependency graph. :-)
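
  For context on what that inference involves: Elle searches the
  transaction dependency graph for cycles, and one standard way to find
  them is via strongly connected components. The sketch below is a
  minimal Python illustration of that idea (Tarjan's algorithm over a
  toy graph); it is not Elle's actual Clojure implementation, and the
  transaction names are made up.

    # Sketch: find cycles in a transaction dependency graph via strongly
    # connected components (Tarjan's algorithm). Any SCC with more than
    # one node is a cycle and cannot be serialized.
    def strongly_connected_components(graph):
        """graph: dict mapping node -> iterable of successor nodes."""
        index = {}        # discovery order of each node
        lowlink = {}      # smallest index reachable from the node
        on_stack = set()
        stack = []
        sccs = []
        counter = [0]

        def visit(v):
            index[v] = lowlink[v] = counter[0]
            counter[0] += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph.get(v, ()):
                if w not in index:
                    visit(w)
                    lowlink[v] = min(lowlink[v], lowlink[w])
                elif w in on_stack:
                    lowlink[v] = min(lowlink[v], index[w])
            if lowlink[v] == index[v]:    # v is the root of an SCC
                component = set()
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    component.add(w)
                    if w == v:
                        break
                sccs.append(component)

        for node in list(graph):
            if node not in index:
                visit(node)
        return sccs

    # Toy graph: an edge T1 -> T2 means "T1 must come before T2".
    deps = {"T1": ["T2"], "T2": ["T3"], "T3": ["T1"], "T4": ["T1"]}
    cycles = [c for c in strongly_connected_components(deps) if len(c) > 1]
    print(cycles)   # [{'T1', 'T2', 'T3'}]: these transactions form a cycle
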
| runningmike wrote:
| Reading the first line I thought it was about
| https://github.com/eclipse-capella/capella, the FOSS solution for
| Model-Based Systems Engineering. Confusing. But now there is also
| a Capela with a single 'l' :-) Great writeup Kyle, thank you!
| cess11 wrote:
| If it's partly a marketing move to get it jepsened before
| release, then it worked on me.
|
| "Like Smalltalk and other image-based languages, Capela persists
| program state directly, and allows programs to be modified over
| time. Indeed, Capela feels somewhat like an object-oriented
| database with stored procedures."
|
| This seems exciting.
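
  The quoted passage describes architecture rather than an API, but the
  flavor of image-style persistence is easy to sketch: program state
  lives in a persistent store and is simply there again on the next
  run. The toy Python example below uses the standard-library shelve
  module as a loose analogy only; it is not Capela's interface, and the
  names are invented.

    # Toy analogy for image-style persistence: object state survives
    # across runs without the caller writing explicit save/load logic.
    # This uses Python's shelve module; it is NOT Capela's API.
    import shelve

    class Counter:
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1
            return self.value

    with shelve.open("image.db", writeback=True) as image:
        # Load the object from the "image" if it exists, else create it.
        counter = image.setdefault("counter", Counter())
        print(counter.increment())   # 1 on the first run, 2 on the next, ...
        image["counter"] = counter   # persist the updated state
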
| derekstavis wrote:
| Derek from Capela here. Marketing was not our primary purpose,
| but I guess it worked out as such ;)
|
| The primary reason we engaged with Jepsen early on is that we
| care a lot about correctness, consistency, and reliability, and we
| wanted the best in this field to establish a baseline of tests
| that our platform must pass before we even put it in the hands of
| anybody.
| sitkack wrote:
| You should team up with Antithesis. https://antithesis.com/
| derekstavis wrote:
| Kyle connected us already - we definitely plan to leverage
| their product for extra layers of verification!
| aeontech wrote:
| Aside from the obvious Smalltalk influence, this also brings to
| mind Darklang (which switched to an open-source model recently
| [1]).
|
| I wonder how this will pan out... very interesting to see new
| approaches being explored.
|
| [1]: https://news.ycombinator.com/item?id=44290653
| derekstavis wrote:
| Darklang is pretty fascinating, and it was brought to our
| attention when we started demoing Capela to some folks in the
| industry. I think where Darklang (and others like Skip [1]) fall
| short is that they are new languages. Capela instead leverages
| typed Python (see the small example below), an existing language
| that is already familiar to most programmers (and LLMs).
|
| [1]: https://skiplabs.io
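
  "Typed Python" here just means ordinary Python with type annotations
  that a checker such as mypy or pyright can verify. The snippet below
  is a trivial, generic example of that style; nothing in it is
  Capela-specific, and the names are illustrative.

    # Plain typed Python: standard annotations checkable with mypy/pyright.
    from dataclasses import dataclass

    @dataclass
    class Account:
        owner: str
        balance: int = 0

    def deposit(account: Account, amount: int) -> int:
        """Add `amount` to the account and return the new balance."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        account.balance += amount
        return account.balance

    print(deposit(Account(owner="ada"), 100))   # 100
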
| aeontech wrote:
| Oh, I just realized I am guilty of a drive-by free-association
| comment without actually saying anything about the subject of
| the post - sorry!
|
| Very cool to see a team use Jepsen for super early pre-
| release testing of the system.
|
| I wonder if you wish you had waited for the runtime to be a
| bit more stable, or whether you feel this was already well
| worth the effort, even with some of the identified failures
| being in "known incomplete" areas? (I could see either side
| of the argument - waiting longer might surface more valuable
| failures, but testing early gives you a chance to catch
| problems before they get baked into the foundation and become
| more difficult to fix...)
|
| Another tool that feels like sci-fi to me any time I hear it
| mentioned is Antithesis [1], written by the people who built
| FoundationDB. Could it be another interesting integration to
| investigate in the future to help bulletproof the language
| runtime?
|
| [1]: https://antithesis.com
| aphyr wrote:
| Author here--from discussions with Capela's team, I think
| this sort of early testing can be remarkably helpful,
| because it offers a test suite that Capela's team can check
| their work against as they move forward.
|
| I would advise against this kind of integration test when the
| data model or API is in constant flux, because then you have
| to rewrite or even redesign the test as the API changes.
| Small changes--adding fields or features, changing
| HTTP paths or renaming fields--are generally easy to keep
| up with, but if there were, say, a redesign that removed
| core operations, or changed the fundamental semantics, it
| might require extensive changes to the test suite.
| derekstavis wrote:
| We thought a lot about this and decided not to wait: we are
| a pretty small team, and having more hands helping us catch
| problems early on helps us make better technical decisions
| as we continue evolving the core platform. On top of that,
| we gained a pretty robust CI step that keeps us accountable
| to the guarantees we want to provide. Reliably and
| consistently storing data is of utmost importance to us.
|
| The plan is to engage with Jepsen again once we have a
| system that passes the current suite, expand the test
| surface even further, and continue iterating until we are
| satisfied with the results. There won't be a public release
| before that is true.
|
| Working with Jepsen also sparked a couple of other
| interesting ideas, like building a Python language fuzzer to
| ensure that many shapes of Python programs work as intended
| in Capela (a rough sketch of that idea appears after this
| comment). That's something we would love to do in the
| future.
|
| Re: Antithesis - absolutely. Kyle mentioned them to us, and
| we think it will be a very interesting tool to adopt to
| further ensure we're delivering a reliable product.
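
  The fuzzing idea mentioned above is only a sentence, so here is one
  hypothetical shape such a fuzzer could take: generate small random
  Python programs, run each under the reference CPython interpreter,
  and compare the result with the system under test. The
  run_in_capela function is a placeholder, since Capela's execution
  hook is not public; nothing below is something the Capela team has
  actually built.

    # Hypothetical sketch of a tiny Python-program fuzzer: generate
    # random arithmetic programs, run them under reference CPython,
    # and compare the result with the system under test.
    import random

    def random_expr(depth: int = 3) -> str:
        """Build a random arithmetic expression as Python source."""
        if depth == 0 or random.random() < 0.3:
            return str(random.randint(0, 9))
        op = random.choice(["+", "-", "*"])
        return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

    def random_program() -> str:
        """A tiny program: a few assignments plus a result variable."""
        lines = [f"x{i} = {random_expr()}" for i in range(3)]
        lines.append("result = x0 + x1 + x2")
        return "\n".join(lines)

    def run_in_cpython(src: str) -> int:
        env: dict = {}
        exec(src, env)           # reference semantics
        return env["result"]

    def run_in_capela(src: str) -> int:
        # Placeholder: a real fuzzer would submit `src` to the system
        # under test and return its computed `result`.
        return run_in_cpython(src)

    for _ in range(100):
        program = random_program()
        expected = run_in_cpython(program)
        actual = run_in_capela(program)
        assert actual == expected, f"mismatch on:\n{program}"
    print("100 random programs matched the reference interpreter")
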
___________________________________________________________________
(page generated 2025-08-07 23:01 UTC)