[HN Gopher] Chisel: A Modern Hardware Design Language
___________________________________________________________________
Chisel: A Modern Hardware Design Language
Author : nairboon
Score : 115 points
Date : 2023-12-27 12:18 UTC (10 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| cmrx64 wrote:
| I much prefer SpinalHDL, having used both.
| le-mark wrote:
| This comment would have more value if the kind poster could
| provide some pros and cons.
| cmrx64 wrote:
| After having evaluated the current state of Chisel 3, it
| seems most of the hard technical reasons to use Spinal have
| evaporated. However, when I was experimenting with both of
| them, Spinal's interface for simulation / testing was easier
| to use, and there were more metaprogramming features
| available.
|
| It may just be a matter of taste now!
| ux01 wrote:
| Are you active in the SpinalHDL community? Reason for asking is
| this page:
|
| https://spinalhdl.github.io/SpinalDoc-RTD/master/SpinalHDL/F...
|
| has an example that says for VHDL you need to write three
| separate processes. FWIW, that's not true, you can write it as
| one:
|
| process (clk, rst, cond) begin
|   if rising_edge(clk) then
|     if cond = '1' then
|       my_reg <= my_reg + 1;
|       my_reg_with_rst <= my_reg_with_rst + 1;
|     end if;
|   end if;
|   if rst = '1' then my_reg_with_rst <= x"0"; end if;
|   my_signal <= cond;
| end process;
|
| Also note that this isn't a VHDL thing but a synthesis thing. But I
| think you'll be hard pushed to find a synthesiser that won't
| synthesise the above code correctly. Quartus, Vivado, Diamond,
| Libero, and ISE will all synthesise it correctly. I expect most
| of those tools will also accept process (all) as well, instead
| of having to write out the sensitivity list explicitly.
| ux01 wrote:
| Ah ok, copying and pasting code into the comment box doesn't
| retain code formatting ...
| ur-whale wrote:
| Yeah, this is one of the most heinous features of HN's
| comment box.
| tails4e wrote:
| Every time I look at the examples, coming from a Verilog
| background, it's strange to see that the clock and reset are all
| implicit rather than explicit. The blinking LED, for example, is
| readable, but the link to the clock and reset in the generated
| Verilog is not clear. How are multiple clock domains and async
| CDCs handled? I've never used Chisel, so maybe this is all well
| managed, but not being explicit about the clock domain seems
| strange.
| cmrx64 wrote:
| Their docs explain it. It's sorta like "implicit this".
| https://www.chisel-lang.org/docs/explanations/multi-clock
| tails4e wrote:
| Thanks, so a single clock and reset is implicit, but clock2
| (or more) is explicit.
| seldridge wrote:
| Yes. There are two common types of modules with different
| behavior here: `Module` and `RawModule`. The former has
| implicit clock and reset ports. The latter has no implicit
| clock and reset. All design can be done with the latter,
| just a bit more verbosely---all clock and reset ports need
| to be defined and anytime a construct that needs a clock or
| reset is used (e.g., a register), it would need to be
| wrapped in a `withClock`/`withReset`/`withClockAndReset`.
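|
| For illustration, a minimal sketch of the two styles (assuming
| a recent chisel3 release; the module and port names here are
| made up):
|
|   import chisel3._
|
|   // `Module`: clock and reset ports are implicit.
|   class CounterImplicit extends Module {
|     val io = IO(new Bundle { val count = Output(UInt(8.W)) })
|     val cnt = RegInit(0.U(8.W))  // uses the implicit clock/reset
|     cnt := cnt + 1.U
|     io.count := cnt
|   }
|
|   // `RawModule`: no implicit clock or reset, so clocked
|   // constructs must be wrapped explicitly.
|   class CounterExplicit extends RawModule {
|     val clk   = IO(Input(Clock()))
|     val rst   = IO(Input(Bool()))
|     val count = IO(Output(UInt(8.W)))
|     withClockAndReset(clk, rst) {
|       val cnt = RegInit(0.U(8.W))
|       cnt := cnt + 1.U
|       count := cnt
|     }
|   }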
| chrsw wrote:
| There's potentially a steep learning curve for Chisel, especially if
| you're not coming from a Computer Science or programming
| language theory background.
|
| It feels like Chisel is the result of functional programmers
| looking at the state of hardware design languages and being
| aghast at how archaic it is. Which is probably true but the
| value here will be helping experienced designers with no
| interest in learning CS theory build better chips faster. If
| Chisel can do that then it will win.
|
| In my opinion the best languages have an extremely low barrier
| to entry. For example, C is very simple. The JavaScript
| development environment and tons of examples are readily
| available to everyone with a web browser. Python is cross-
| platform, has libraries for almost anything, and offers
| relatively painless interoperability.
| Brian_K_White wrote:
| Maybe it's like not labelling the GND net next to every gnd
| symbol in a schematic, or even omitting both vcc and gnd pins
| from ic symbols entirely, except where they are not all the
| same.
|
| Implicit hidden magic is bad, but then again a schematic full
| of individual traces instead of busses, and 400 vcc and gnd
| labels can actually clutter and hide the essence of the design
| more than convey it.
|
| Maybe it's like that. You can spell it all out if you want to,
| but if it's always the same and everyone knows it, then maybe
| it's just obfuscating clutter to show it if there isn't some
| reason to.
|
| (btw I have no opinion on the language itself. is OO a good fit
| for hardware? maybe but I personally don't like it for software
| so I go in skeptical)
| tmitchel2 wrote:
| I wish there was one in TypeScript, I just can't get on with
| Python.
| woadwarrior01 wrote:
| Chisel is based on Scala, not Python.
| snitty wrote:
| nmigen is a Python-based HDL.
| progbits wrote:
| I figured this must exist and indeed:
| https://github.com/gateware-ts/gateware-ts
| progbits wrote:
| I've played with this and while I much prefer it to verilog (and
| even migen) it is still too implicit for me.
|
| Most of my time building a VGA toy was wasted debugging wrong
| counter wrapping and similar issues. I would like to try out something
| where the register width and operation semantics (wrap, extend,
| saturate) have to be always explicit.
|
| Maybe it would turn out to be too annoying?
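|
| (For what it's worth, Chisel makes some of this explicit: widths
| are declared, and there are separate wrapping and width-expanding
| add operators, though saturation still has to be written out by
| hand. A minimal sketch, assuming chisel3:)
|
|   import chisel3._
|
|   class AddDemo extends Module {
|     val io = IO(new Bundle {
|       val a    = Input(UInt(8.W))
|       val b    = Input(UInt(8.W))
|       val wrap = Output(UInt(8.W))  // +% truncates to 8 bits (wraps)
|       val grow = Output(UInt(9.W))  // +& keeps the carry (9 bits)
|       val sat  = Output(UInt(8.W))  // saturation written out by hand
|     })
|     io.wrap := io.a +% io.b
|     io.grow := io.a +& io.b
|     val full = io.a +& io.b
|     io.sat  := Mux(full > 255.U, 255.U, full(7, 0))
|   }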
| irdc wrote:
| > I would like to try out something where the register width
| and operation semantics (wrap, extend, saturate) have to be
| always explicit.
|
| Try VHDL. Its Ada heritage gives it that explicit feel.
| OhMeadhbh wrote:
| I come from a VHDL background and I think this is why Chisel
| always looked weird to me.
| programjames wrote:
| One thing I really like about Verilog is explicit register
| widths. I want to be able to work at the individual bit level,
| something that Python (and even C) are not very good at. Is
| Chisel decent for efficiency?
| glitchc wrote:
| C is excellent at bit-banging, miles better than Python.
| retrac wrote:
| C is able to express efficient code for bit-banging. But it's
| still ugly and hard to use.
|
| Ada handles it better than any language I've seen. Types are
| abstract but can be given concrete realizations. A number can
| be defined as, for example, the range of 0 to 31. And
| optionally the compiler could be told to store that in bits
| 20 to 25 of a 32-bit word. Or non-consecutively, or out of
| order, or across multiple words. Just about any kind of bit
| reordering and packing is supported. With overflow checking
| and so on, too.
|
| The takeaway is that you should never be using shifts and
| bitwise and/or operations. The compiler should translate into
| the low-level binary representation behind the scenes for you
| (and also check that it fits and doesn't overlap).
| rowanG077 wrote:
| Depends on what you mean by excellent. It sucks by definition
| since it doesn't have generics. It's efficient though which
| in some sense makes it excellent.
| IshKebab wrote:
| Actually for that Python is better than C due to its support
| for arbitrary precision integers. One of the few things
| Python got really right.
|
| It's way slower of course.
| gchadwick wrote:
| I've said as much before, but I find the issue with alternative
| HDLs vs SystemVerilog is that they concentrate on fixing annoying
| and frustrating things but don't address the really hard issues
| in hardware design, and can actually make them harder.
|
| For example SystemVerilog has no real typing which sucks, so a
| typical thing to do is to build a massively improved type system
| for a new HDL. However in my experience good use of verilog style
| guides and decent linting tools solves most of the problem. You
| do still get bugs caused by missed typing issues but they're
| usually quickly caught by simple tests. It's certainly _annoying_
| to have to deal with all of this, but fundamentally, if it's all
| made easier, it's not significantly improving your development
| time or final design quality.
|
| Another typical improvement you'll find in an alternative HDL is
| vastly improved parameterization and generics. Again this is
| great to have but mostly makes tedious and annoying tasks simpler
| but doesn't produce a major impact. The reason is that writing
| good HDL that works across a huge parameterisation space is very
| hard. You have to verify every part of the parameter space you're
| using, and you need to ensure you get good power/performance/area
| results out of it too. Doing this can require very different
| microarchitectural decisions (e.g. single-, dual- and triple-issue
| CPUs will all need to be built differently; improved
| parameterization doesn't save you from this). Ultimately you
| often only want to use a small portion of the parameter space
| anyway, so just doing it in SystemVerilog, possibly with some
| auto-generated code using Python, works well enough even if it's
| tedious.
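|
| (To make "improved parameterization and generics" concrete: in
| Chisel, for example, parameters are just Scala constructor
| arguments, as in this hypothetical sketch. Elaborating any point
| in the space is trivial; verifying it and getting good PPA across
| the whole space is still the hard part described above.)
|
|   import chisel3._
|   import chisel3.util._
|
|   // Width- and depth-parameterized register file.
|   class ParamRegFile(depth: Int, width: Int) extends Module {
|     val io = IO(new Bundle {
|       val wen   = Input(Bool())
|       val waddr = Input(UInt(log2Ceil(depth).W))
|       val wdata = Input(UInt(width.W))
|       val raddr = Input(UInt(log2Ceil(depth).W))
|       val rdata = Output(UInt(width.W))
|     })
|     val mem = Reg(Vec(depth, UInt(width.W)))
|     when(io.wen) { mem(io.waddr) := io.wdata }
|     io.rdata := mem(io.raddr)
|   }
|
|   // new ParamRegFile(32, 64) and new ParamRegFile(4, 8) both
|   // elaborate, but each still needs its own verification and PPA
|   // work.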
|
| So if the practical benefits turn out to be minor, why not take
| all the nice quality-of-life improvements anyway? Because there's
| a large impact on the hard things. From a strictly design
| perspective these are things like clock domain crossing, power,
| area and frequency optimization. Here you generally need a good
| understanding of what the actual circuit is doing and to be able
| to connect tool output (e.g. the gates your synthesis tool has
| produced) to your HDL. This is where the typical flow of HDL ->
| SystemVerilog -> tool output can become a big problem. The HDL to
| SystemVerilog step can produce code that is very hard to read and
| hard to connect to your input HDL. That adds a new and tricky
| mental step when you're working with the design: first understand
| the circuit issue, then map that to the hard-to-read
| SystemVerilog, then map that to your HDL and work out what you
| need to change.
|
| Outside of design alone a major cost of building silicon is
| verification. Alternative HDLs generally don't address this at
| all and again can make it harder. Either you simulate the HDL
| itself entirely, which can be fine, but then you're banking on
| minimal bugs in that simulator and on there being no bugs in the
| HDL -> SystemVerilog step. Alternatively you simulate the
| SystemVerilog directly with an existing simulator, but then
| you've got the HDL to SystemVerilog mapping problem all over
| again.
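|
| (For reference, "simulate the HDL itself" typically looks like the
| following; a rough sketch assuming the separate chiseltest
| library, whose default backend simulates the FIRRTL produced from
| the Chisel design rather than the emitted SystemVerilog. The
| Adder module is made up for the example.)
|
|   import chisel3._
|   import chiseltest._
|   import org.scalatest.flatspec.AnyFlatSpec
|
|   class Adder extends Module {
|     val io = IO(new Bundle {
|       val a   = Input(UInt(8.W))
|       val b   = Input(UInt(8.W))
|       val sum = Output(UInt(8.W))
|     })
|     io.sum := io.a +% io.b
|   }
|
|   class AdderSpec extends AnyFlatSpec with ChiselScalatestTester {
|     "Adder" should "add two numbers" in {
|       test(new Adder) { c =>
|         c.io.a.poke(2.U)
|         c.io.b.poke(3.U)
|         c.io.sum.expect(5.U)  // checked against the simulated design
|       }
|     }
|   }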
|
| I think my ideal HDL at this point is a stripped-down
| SystemVerilog with a good type system and better generative
| capability that, crucially, produces plain SystemVerilog that's
| human readable (maintaining comments, signal and module names,
| and module hierarchy as much as possible).
| bakul wrote:
| Have you looked at/used Bluespec SystemVerilog? If so, any
| comments based on your experience?
|
| https://github.com/B-Lang-org/bsc
| gchadwick wrote:
| Yes, I actually built two CPUs in it (one a derivative of the
| other) for my PhD over a decade ago. That experience helped
| shape my view on new HDLs.
|
| As a specific example Bluespec has this system of rules that
| define hardware behaviour. From a high level it's a very nice
| system: you describe the behaviour you want and the constraints,
| and the compiler works out the details. In practice you have
| to think about the details and you've got to work out how the
| compiler will compose things. At least you do if you care
| about how many cycles something takes. I never did any
| frequency optimisations either which would also be harder as
| it'd be deeply coupled to the rule scheduling behaviour.
|
| Ultimately, like many new HDLs, it's a nice language from an
| abstract perspective but very much feels like someone with
| little practical experience of building real-world silicon
| applying software language design to hardware. The non-existent
| type system of SystemVerilog gets viewed as a major problem
| rather than what it is in reality: an annoyance that causes more
| tedium than any real substantial issues.
| bakul wrote:
| Thanks; just the kind of response I was hoping for! Is the
| problem an inability to express real world design
| constraints in a high level HDL?
| gchadwick wrote:
| I think the inherent problem is abstraction just doesn't
| work in the same way as it does in software. Various
| things keep pulling you down to the circuit level so if
| you're too far above it you're going to have a hard time
| as you have to reason through all those abstractions you
| built to avoid thinking about it.
|
| Closing timing (getting your design to pass timing
| analysis at a desired frequency) is a great example.
| Physical details like how many gates are on some path
| between flops, how far apart those flops are and how big
| the gates are (bigger gate, bigger drive, faster
| transitions) and what else is connected to them (more fan
| out more capacitance to drive, slower transitions) matter
| and in standard synchronous design this is pervasive
| across everything you do. Abstract too far from those
| details and closing timing becomes a nightmare.
|
| Imagine you wrote some standard data structure in the
| language of your choice. Now imagine the more call sites
| you have for the methods that manipulate it the slower it
| goes everywhere every single time you call it. Imagine
| some tiny edge case buried deep in the logic calling it
| occasionally could massively slow down accesses every
| time from everywhere. How would that change the way you
| build abstractions?
| mikeurbach wrote:
| Disclaimer: I work on Chisel and CIRCT, and these opinions are
| my own.
|
| These are good points, and I think Chisel is actually improving
| in these areas recently. Chisel is now built on top of the
| CIRCT[1] compiler infrastructure, which uses MLIR[2] and allows
| capturing much more information than just RTL in the
| intermediate representations of the design. This has several
| benefits.
|
| Regarding the problem of converting from HDL to System Verilog,
| and associating the tool outputs to your inputs: a ton of
| effort has gone into CIRCT to ensure its output is decently
| readable by humans _and_ has good PPA with popular backend
| tools. There is always room for improvement here, and new
| features are coming to Chisel in the form of intrinsics and new
| constructs to give designers fine grained control over the
| output.
|
| On top of this, a new debug[3] intermediate representation now
| exists in CIRCT, which associates constructs in your source HDL
| with the intermediate representation of the design as it is
| optimized and lowered to System Verilog. Think of it like a
| source map that allows you to jump back and forth between the
| final System Verilog and the source HDL. New tooling to aid in
| verification and other domains is being built on top of this.
|
| Besides this, the combination of Chisel and CIRCT offers a
| unique solution to a deeper problem than dealing with minor
| annoyances in System Verilog: capturing design intent beyond
| the RTL. New features have been added to Chisel to capture
| higher-level system descriptions, and new intermediate
| representations have been added to CIRCT to maintain this
| information and its association to the design. For example, you
| could add information about bus interfaces directly in Chisel,
| and have a single source of truth generate both the RTL and
| other collateral like IP-XACT. As the design evolves, the
| collateral stays up to date with the RTL. I gave a talk[4] at a
| CIRCT open design meeting that goes into more detail about
| what's possible here.
|
| [1] https://circt.llvm.org/
|
| [2] https://mlir.llvm.org/
|
| [3] https://circt.llvm.org/docs/Dialects/Debug/
|
| [4] https://sifive.zoom.us/rec/share/MhHtXPg_7iZk-
| QWw0A66CaBJDGs...
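|
| (For orientation, the Chisel -> CIRCT -> SystemVerilog flow
| described above is typically driven roughly like this; a minimal
| sketch, assuming a recent Chisel release where the CIRCT-based
| circt.stage.ChiselStage is available.)
|
|   import chisel3._
|   import circt.stage.ChiselStage
|
|   class Blinky extends Module {
|     val io  = IO(new Bundle { val led = Output(Bool()) })
|     val cnt = RegInit(0.U(24.W))
|     cnt := cnt + 1.U
|     io.led := cnt(23)
|   }
|
|   object Main extends App {
|     // Emits SystemVerilog via the CIRCT (firtool) backend.
|     println(ChiselStage.emitSystemVerilog(new Blinky))
|   }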
| erichocean wrote:
| Thank you for this write up.
| gchadwick wrote:
| Thanks for the info, these all certainly sound like promising
| developments, though I still think there are major hurdles to
| overcome.
|
| > good PPA with popular backend tools
|
| Getting good PPA for any given thing you can express in the
| language is only part of the problem. The other aspect is how
| easy does the language make it to express the thing you need
| to get the best PPA (discussed in example below)?
|
| > Think of it like a source map that allows you to jump back
| and forth between the final System Verilog and the source
| HDL.
|
| This definitely sounds useful (I wish synthesis tools did
| something similar!) but again it's only part of the puzzle
| here. It's all very well to identify the part of the HDL that
| relates to some physical part of the circuit but how easy is
| it to go from that to working out how to manipulate the HDL
| such that you get the physical circuit you want?
|
| As a small illustrative example here's a commit for a timing
| fix I did recently:
| https://github.com/lowRISC/opentitan/commit/1fc57d2c550f2027....
| It's for a specialised CPU for
| asymmetric crypto. It has a call stack that's accessible via
| a register (actually a general stack but typically used for
| return addresses for function calls). The register file looks
| to see if you're accessing the stack register, in which case
| it redirects your access to an internal stack structure and
| when reading returns the top of the stack. If you're not
| accessing the stack it just reads directly from the register
| file as usual.
|
| The problem comes (as it often does in CPU design) in error
| handling. When an error occurs you want to stop the stack
| push/pop from happening (there's multiple error categories
| and one instruction could trigger several of them, see the
| documentation:
| https://opentitan.org/book/hw/ip/otbn/index.html for
| details). Whether you observed an error or not was factored
| into the 'are you doing a stack push or pop' calculation, and in
| turn factored into the mux that chose between data from the
| top of the stack and data from the register file. The error
| calculation is complex and comes later in the cycle, so
| factoring it into the mux was not good, as it made the
| register file data turn up too late. The solution, once the
| issue was identified, was simple: separate the logic deciding
| whether the action itself should occur (effectively the flop
| enables for the logic making up the stack) from the logic
| calculating whether we had a stack or register access
| (which is based purely on the register index being accessed).
| The read mux then uses the stack-or-register-access
| calculation without the 'action actually occurs' logic, and
| the timing problem is fixed.
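|
| (An illustrative sketch of that separation, in Chisel rather than
| the actual OpenTitan SystemVerilog, with made-up names: the read
| mux selects on the early "is this the stack register" term, while
| the late, error-gated term only drives the stack update enable.)
|
|   import chisel3._
|   import chisel3.util._
|
|   class StackRegFileRead(nRegs: Int, w: Int) extends Module {
|     val io = IO(new Bundle {
|       val rdAddr   = Input(UInt(log2Ceil(nRegs).W))
|       val rdEn     = Input(Bool())
|       val err      = Input(Bool())     // complex, arrives late in the cycle
|       val stackTop = Input(UInt(w.W))  // top of the call stack (stack logic elided)
|       val rdData   = Output(UInt(w.W))
|       val popEn    = Output(Bool())    // would drive the stack's pop enable
|     })
|     val regs        = Reg(Vec(nRegs, UInt(w.W)))
|     val stackRegIdx = (nRegs - 1).U
|
|     // Early term: depends only on the register index.
|     val isStackReg = io.rdAddr === stackRegIdx
|     // Late term: gated by the error calculation, used only for state updates.
|     io.popEn := isStackReg && io.rdEn && !io.err
|
|     // The read mux uses only the early term, keeping the late error
|     // logic off the register-file read path.
|     io.rdData := Mux(isStackReg, io.stackTop, regs(io.rdAddr))
|   }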
|
| To get to this fix you have two things to deal with, first
| taking the identified timing path and choosing a sensible
| point to target for optimization and second actually being
| able to do the optimization. Simply having a mapping saying
| this gate relates to this source line only gets you so far,
| especially if you've got abstractions in your language such
| that a single source line can generate complex structures.
| You need to be able to easily understand how all those source
| lines relate to one another to create the path to choose
| where to optimise something.
|
| Then there's the optimization itself: pretty trivial in this
| case, as it was isolated to the register file, which already
| had separate logic to determine whether we were actually
| going to take the action vs determining if we were accessing
| the stack register or a normal register. Because of
| SystemVerilog's lack of powerful abstractions, making a tweak
| to get the read mux to use the earlier signal was easy to do,
| but how does that work when you've got more powerful
| abstractions that deal with all the muxing for you in cases
| like this and the tool is producing the mux select signal for
| you? How about where the issue isn't isolated to a single
| module and is spread around (e.g. see another fix I did
| https://github.com/lowRISC/opentitan/commit/f6913b422c0fb82d...
| which again boils down to separating the 'this action is
| happening' logic from the 'this action could happen' logic and
| using them appropriately in different places)?
|
| I haven't spent much time looking at Chisel so it may be
| there are answers to this, but if it gives you powerful
| abstractions you end up having to think harder to connect
| those abstractions to the physical circuit result. A tool
| telling you gate X was ultimately produced by source line Y
| is useful but doesn't give you everything you need.
|
| > the combination of Chisel and CIRCT offers a unique
| solution to a deeper problem than dealing with minor
| annoyances in System Verilog: capturing design intent beyond
| the RTL
|
| > you could add information about bus interfaces
| directly in Chisel, and have a single source of truth
| generate both the RTL and other collateral like IP-XACT.
|
| Your example here certainly sounds useful but to me at least
| falls into the bucket of annoying and tedious tasks that
| won't radically alter how you design, nor the final quality
| and speed of development. Sure, if you need to generate
| IP-XACT for literally thousands of variations of some piece
| of IP this kind of thing is essential, but practically you
| have far fewer variations you actually want to work with, and
| the manual work required is annoying busy work that will
| generate some issues but you can deal with it. Then, for the
| thousands-of-variations case, the good old pile o' Python
| doing auto-generation can work.
|
| Certainly having a solution based upon a well designed
| language with a sound type system sounds great and I'll
| happily have it but not if this means things like timing
| fixes and ECOs become a whole lot harder.
|
| Thanks for the link to the video I'll check it out.
|
| Maybe I should make one of my new year's resolution to
| finally get around to looking at Chisel and CIRCT more
| deeply! Could even have a crack at toy HDL in the form of the
| fixed SystemVerilog with a decent type system solution I
| proposed above using CIRCT as an IR...
| seldridge wrote:
| > Could even have a crack at toy HDL in the form of the
| fixed SystemVerilog with a decent type system solution I
| proposed above using CIRCT as an IR...
|
| This is the exact type of activity that CIRCT is trying to
| make easier! There are both enough core hardware dialects
| that new languages (generator-style embedded domain
| specific languages or actual languages) can be quickly
| built as well as the flexibility of MLIR to define _new_
| dialects that represent the constructs and type system of
| the language you are trying to build while still inter-
| operating with or lowering to existing dialects.
|
| This was the kind of thing that didn't work well with
| Chisel's FIRRTL IR, as it was very closely coupled to Chisel
| and its opinions. Now FIRRTL is just another CIRCT dialect
| and, even if you're not using Chisel and FIRRTL, you're
| benefitting from the shared development of the core
| hardware dialects and SystemVerilog emission that Chisel
| designs rely on.
| mikeurbach wrote:
| > To get to this fix you have two things to deal with,
| first taking the identified timing path and choosing a
| sensible point to target for optimization and second
| actually being able to do the optimization.
|
| > Because of SystemVerilog's lack of powerful abstractions
| making a tweak to get the read mux to use the earlier
| signal was easy to do but how does that work when you've
| got more powerful abstractions that deal with all the
| muxing for you in cases like this and the tool is producing
| the mux select signal for you.
|
| Thanks for the example and illustrating a real world
| change. In this specific case, Chisel provides several
| kinds of Mux primitives[1], which CIRCT tries to emit in
| the form you'd expect, and I think Chisel/CIRCT would admit
| a similarly simple solution.
|
| That said, there are other pain points here where Chisel's
| higher-level abstractions make it hard to get the gates you
| want, or make a simple change when you know how you want
| the gates to be different. A complaint we hear from users
| is the lack of a direct way to express complex logic in
| enable signals to flops. Definitely something we can
| improve, and the result will probably be new primitive
| constructs in Chisel that are lower-level and map more
| directly to the System Verilog that backend tools expect. This
| is one example of what I was alluding to in my previous
| reply about new primitives in Chisel.
|
| > Your example here certainly sounds useful but to me at
| least falls into the bucket of annoying and tedious tasks
| that won't radically alter how you design nor the final
| quality and speed of development.
|
| I guess it depends on your goals. I spoke[2] about CIRCT
| and the new features in this realm at Latch-Up 2023, and
| after the talk people from different companies seemed very
| excited about this. For example, someone from a large
| semiconductor company was complaining about how brittle it
| is to maintain all their physical constraints when RTL
| changes.
|
| > Maybe I should make one of my new year's resolution to
| finally get around to looking at Chisel and CIRCT more
| deeply!
|
| We'd love to hear any feedback!
|
| > Could even have a crack at toy HDL in the form of the
| fixed SystemVerilog with a decent type system solution I
| proposed above using CIRCT as an IR...
|
| That's exactly what the CIRCT community is hoping to
| foster. If you're serious about diving in, I'd recommend
| swinging by a CIRCT open design meeting. The link is at the
| top of the CIRCT webpage. These can be very informal, and
| we love to hear from people interested in using CIRCT to
| push hardware description forward.
|
| [1] https://www.chisel-lang.org/docs/explanations/muxes-
| and-inpu...
|
| [2] https://www.youtube.com/watch?v=w_W0_Z3n9PA
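|
| (The mux primitives referred to in [1] include muxes such as
| Mux, MuxCase and PriorityMux; a small sketch, assuming
| chisel3.util, with made-up signal names.)
|
|   import chisel3._
|   import chisel3.util._
|
|   class MuxKinds extends Module {
|     val io = IO(new Bundle {
|       val sel  = Input(Bool())
|       val idx  = Input(UInt(2.W))
|       val a    = Input(UInt(8.W))
|       val b    = Input(UInt(8.W))
|       val c    = Input(UInt(8.W))
|       val out1 = Output(UInt(8.W))
|       val out2 = Output(UInt(8.W))
|       val out3 = Output(UInt(8.W))
|     })
|     // Plain two-way mux.
|     io.out1 := Mux(io.sel, io.a, io.b)
|     // First matching condition wins, with an explicit default.
|     io.out2 := MuxCase(io.c, Seq((io.idx === 0.U) -> io.a,
|                                  (io.idx === 1.U) -> io.b))
|     // Priority mux: the earliest true condition selects.
|     io.out3 := PriorityMux(Seq(io.sel -> io.a, true.B -> io.b))
|   }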
| UncleOxidant wrote:
| > _For example SystemVerilog has no real typing which sucks, so
| a typical thing to do is to build a massively improved type
| system for a new HDL. However in my experience good use of
| verilog style guides and decent linting tools solves most of
| the problem_
|
| This is why I prefer VHDL. It's strongly typed. I worked at an
| EDA company on an HLS tool that generated HDL (Verilog, VHDL
| and SystemC). Customers would report linting problems with the
| generated Verilog. Since backend code generation was what I
| worked on I got to fix them. We had almost 0 problems with the
| generated VHDL mostly due to strong typing. But lots of issues
| with the generated Verilog that needed fixing.
|
| > _Another typical improvement you'll find in an alternative
| HDL is vastly improved parameterization and generics_
|
| Another area where VHDL was already there. SystemVerilog makes
| some improvements over Verilog.
| gchadwick wrote:
| Yeah I should really get around to learning VHDL!
| ur-whale wrote:
| > This is why I prefer VHDL. It's strongly typed.
|
| I agree, and I must say I feel much "safer" when coding in
| VHDL than I do in Verilog, but OTOH, VHDL's strong typing
| sometimes cuts the other way.
|
| I have found myself way too many times at the receiving end
| of VHDL's type system furiously trying to prevent me from
| casting something of type A to something of type B when I
| _knew_ the underlying bit representation made the cast
| absolutely trivial.
| IshKebab wrote:
| > However in my experience good use of verilog style guides and
| decent linting tools solves most of the problem. You do still
| get bugs caused by missed typing issues but they're usually
| quickly caught by simple tests.
|
| Decent linting tools are really expensive. And even if verif
| does catch all these simple typing mistakes, they still cost a
| huge amount of time!
|
| I think the real issue with most of these "compile to Verilog"
| tools is that all the vendor tools work with SystemVerilog, and
| now you're debugging autogenerated code, which _sucks_.
|
| Another huge issue is formal verification. The tools only
| understand SVA so you basically have to be using SystemVerilog.
|
| > I think my ideal HDL at this point is a stripped down
| SystemVerilog with a good type system, better generative
| capability that crucially produces plain system verilog that's
| human readable (maintaining comments, signal and module names
| and module hierarchy as much as possible).
|
| I 100% agree here. There's a gazillion things you could fix in
| SystemVerilog and still have something that compiles to
| something similar enough that it's easy to debug. Kiiind of
| like TypeScript for SystemVerilog. I wonder if anyone is
| working on that.
| donatj wrote:
| Not being a hardware person, when I heard "Hardware Design
| Language" I was thinking more along the lines of Snow White[1] -
| the idea of an open source industrial design language would be
| pretty interesting, something along the lines of Material UI but
| for hardware.
|
| 1. https://en.wikipedia.org/wiki/Snow_White_design_language
| bee_rider wrote:
| That's interesting! HDLs are older than dirt and the term is
| pretty well embedded in the computer engineering field at this
| point, but it is a funny near-collision.
| nkotov wrote:
| Are there other ones similar to this? For example, I really
| loved the 80s/90s Sony style.
| ur-whale wrote:
| The goal is worthy, the effort is commendable, but the underlying
| language (Scala) is an absolute turn-off AFAIC, and I suspect I'm
| far from being the only one.
| irdc wrote:
| I don't understand how this improves upon VHDL, even after
| reading their own explanation[0]. Just why they think object
| orientation makes hardware design easier isn't really explained.
| After a quick look at it I much prefer VHDL's entities (though
| their syntax is rather too wordy for my tastes), which at least
| make the direction of signals clearer. The problem with libraries
| could have been easily solved by extending/fixing VHDL instead of
| going through all this effort.
|
| 0. https://stackoverflow.com/questions/53007782/what-
| benefits-d...
| ux01 wrote:
| Doesn't VHDL have good support for libraries? VHDL packages and
| subfunctions can have generics (which I've not used) for
| library support similar to Ada. VHDL entities can also have
| generics (which I have used). I was wondering what was lacking
| and needs extending/fixing in VHDL.
| modulovalue wrote:
| There's a similar project at Intel: https://github.com/intel/rohd
|
| It uses Dart instead of Scala.
| physPop wrote:
| Anyone have experience that can compare this with Clash?
| aidenn0 wrote:
| It's been like 20 years since I did anything with an FPGA, but
| back then you basically had to use whatever tools your vendor
| provided you with. Have things improved to the point where an
| open-source HDL is usable with a large fraction of the FPGAs
| available?
| lambda wrote:
| Mostly you still have to use their tools, Chisel just compiles
| to Verilog and then you use their tools for the rest of the
| process.
|
| There are open source tools based on reverse engineering the
| bitstream for some smaller FPGAs like the Lattice family, some
| preliminary work on Xilinx, and one vendor actually supporting
| the open source toolchain (QuickLogic), but for anything
| serious on the major FPGA platforms, you still need to use the
| vendor toolchain.
| Brian_K_White wrote:
| Thanks for the heads up about QuickLogic. Hadn't heard of them
| yet.
| BooneJS wrote:
| These languages are fun. "Look ma, no verilog!" But the
| underlying problem with all of these DSLs is the fact that the
| EDA[0] industry interoperates on verilog. Period. Worse, at some
| point in the design cycle, post-synthesized gate-level verilog
| becomes the codebase.
|
| No upstream verilog changes are allowed because it can be
| difficult to get a precise edit (e.g. 2-input NAND into a 2-input
| AOI by changing a verilog function) and you just don't have 3
| weeks of runtime to go from verilog to GDSII again. Or you want
| to make a metal-only respin that only changes one $0.xM mask
| layer and requires 8 weeks of fab time instead of changing
| multiple metal layers including the base and needs 16 weeks and a
| $xM payment.
|
| Programming language design is quite rich because new languages
| used to cross-compile to C, and now they generally generate LLVM IR.
| doesn't matter what the bug is in the final binary; you're not
| going to hex edit the binary like you would with a single metal
| layer of a 300mm wafer. You're just going to recompile and it
| generally doesn't matter if one machine instruction changes or 1M
| do because unlike verilog, not even GHC needs 3w to compile a
| program.
|
| source: I've been on chip design teams for 2 decades and finally
| gave up on fighting verilog.
|
| [0]: Electronic Design Automation. Synopsys, Cadence, Siemens,
| Ansys, etc.
| phkahler wrote:
| Your stance seems well founded and very compelling. As an
| outsider to chip design I'm gonna play devil's advocate and
| ask: what if the higher level tools reduce overall design time
| and eliminate a lot of those errors you end up patching in the
| metal layers?
|
| I fought the same battle with auto-generated C from Simulink
| models, and really don't think that's the way to do much for
| production. But that's because the tool isn't good enough for
| general software development (can't write hello world, for
| example), not because I worry about patching generated code.
| sweden wrote:
| The OP makes an interesting point but doesn't point out
| the main problem with high level hardware languages: these
| kinds of languages don't allow you to describe exactly the
| hardware you want, they only allow you to describe its
| functionality and then generate hardware for said
| functionality. The problem is that you will end up with
| hardware that is less optimized than if you were to design it
| in Verilog.
|
| I work at a very big semiconductor company and we did some
| trials implementing the exact same hardware we had in Verilog
| in a high level HDL, and while development could be faster,
| we ended up with worse PPA (Power, Performance and Area). If
| you try to improve this PPA, you just end up bypassing the
| advantages of high level HDLs.
|
| On top of that, it raises a lot of questions on verification:
| are you going to do verification (testbenches) in the Chisel
| code or in the generated Verilog code from Chisel? If you do
| it in Chisel, how do you prove that Chisel didn't introduce
| bugs in the generated Verilog code (which is what you will
| end up shipping to the foundry for tape out after synthesis
| and place & route)? If you do it in the generated Verilog
| code, how do you trace the bugs back to the Chisel code?
|
| I do think that we need a new language but not for design.
| Verilog/System Verilog is fine for hardware design, we don't
| need to reinvent the wheel here. We will always end up in
| Verilog in our synthesis and quite frankly, we don't spend
| that much time writing Verilog for hardware design. Hardware
| design is 5 lines of code and that's it. The real cost of
| hardware development is the other side of the coin, which is
| hardware verification.
|
| If hardware design is 5 lines of code, hardware verification
| is 500 lines. Writing testbenches and developing hardware
| verification environments and flows is essentially normal
| programming and we are stuck in System Verilog for that,
| which is a very bad programming language. Using System
| Verilog as a programming language is so prone to unintended
| bugs in your testbenches and bad programming constructs.
|
| This is what we should try to improve, verification not
| design. We spend far too much time in hardware verification
| and a lot of that time is spent dealing with pitfalls from
| System Verilog as a programming language.
|
| I wish people would be investing more thinking here rather
| than trying to make hardware design friendlier for
| programmers.
| ninjha wrote:
| I think Chisel's main win is that it is _great_ from an open-
| source research perspective.
|
| Taking advantage of the functional nature of Chisel enables a
| set of generators called Chipyard [0] for things like cores,
| networking peripherals, neural network accelerators, etc. If
| you're focusing on exploring the design space of one particular
| accelerator and don't care too much about the rest of the chip,
| you can easily get a customized version of the RTL for the rest
| of your chip. Chisel handles connecting up all the components
| of the chip. All the research projects in the lab benefit from
| code changes to the generators.
|
| Chisel even enables undergraduate students (like me!) to tape
| out a chip on a modern-ish process node in just a semester,
| since it significantly reduces the RTL we have to write.
| Most of the remaining time is spent working on the physical
| design process.
|
| [0]: https://github.com/ucb-bar/chipyard
|
| [1]: https://classes.berkeley.edu/content/2023-Spring-
| ELENG-194-0...
| duped wrote:
| Chisel isn't some toy, it's actually used in industry. IIRC
| SiFive's portfolio is built on it.
| codedokode wrote:
| I tried to use Verilog for a DIY project and found no way to
| control a Verilog model from Python. Why is it like this? Do people
| really write tests directly in this awful, outdated language
| instead of using Python?
|
| I tried to use cocotb, but it is not what I want. It runs a
| Verilog simulator and launches a Python script from it, but I
| want it the other way around: I want to create a Verilog model
| instance and access it from Python.
___________________________________________________________________
(page generated 2023-12-27 23:00 UTC)