[HN Gopher] Compiling code into silicon
___________________________________________________________________
Compiling code into silicon
Author : adapteva
Score : 111 points
Date : 2021-12-07 13:50 UTC (9 hours ago)
(HTM) web link (www.siliconcompiler.com)
(TXT) w3m dump (www.siliconcompiler.com)
| sydthrowaway wrote:
 | How is this different from HLS?
| mediocregopher wrote:
| > Process scaling is coming to an end and it is a social
| imperative that we find a new path to extend the Moore's Law
| exponential.
|
| ... is it? Are our human rights at risk here? This sentence feels
| weird.
| betwixthewires wrote:
| It does feel weird, in general I'm averse to terms like "social
| imperative", but what I like to think the author is getting at
| is that humanity can get vastly more utility out of computing
| if we move away from this abstraction layer between the
| operations we want to perform and the hardware we perform them
| on. General purpose hardware is great when you're working out
| what sorts of things you want to do with it, experimenting, but
| when you already know exactly what you want to do it is
| extremely inefficient, and just churning out a machine that
| does exactly what you want is much better for everybody.
| ClumsyPilot wrote:
| Indeed, I can think of a few areas of progress where social
| imperative is much greater, such as agriculture, healthcare and
| carbon free energy
| mediocregopher wrote:
| Can you explain _why_ it's so important to you? I literally
| don't get it, but I've also always assumed Moore's Law would
| run out eventually, so my worldview has that baked in.
| hinkley wrote:
| You can only solve the problems you understand, and human
| knowledge and civilization have expanded to the point where
| people are not interchangeable. I only know enough about the
| carbon and agriculture problems that I can explain them to
| other techies. I can't solve them. I can't even be sure my
| advice on which legislation needs to pass or be repealed is
| right. It's all a semi-educated guess.
|
| We are a giant parallel engine and insisting that we order
| all work crashes the capacity of the system to a rounding
| error. It's actively psychologically damaging because it
| gatekeeps people out of acknowledging that they can help be
| part of a solution instead of pretending to be silent
| witnesses to the effort of others.
| ClumsyPilot wrote:
| "insisting that we order all work crashes the capacity of
| the system to a rounding error"
|
 | I am not arguing "no-one gets to party until we solve
 | world hunger", I am questioning the 'social imperative'
 | claim - computer performance has improved what, 3-5x in
 | the last 5 years? It's not clear to me how this has
 | benefitted the average Joe, who is struggling to put
 | food on the table, in a measurable way.
|
| By contrast, if energy/food prices fell 5x, or became x
| more nutritious, the impact would be hard to miss. Even
| reduced prices of space rockets seem to have made more of a
| difference.
| azeirah wrote:
| I personally think it has to do with the belief that
| improvements in ALL important areas such as agriculture,
| energy, medicine etc are supported strongly by increased
| computational power.
|
| This is perhaps a belief, an ideology, an assumption or
| perhaps a theory. It doesn't matter much what you call
| it. The important part is to understand that computation
| today plays a similar role as to what mathematics did
| before we got computation.
|
 | Interestingly enough, Alan Kay gave a talk a couple of
 | years ago about this topic, questioning why, despite
 | sustained growth and investment in computing power in
 | medicine, medicine discovery was actually _dropping_ as
 | a result.
|
| I don't recall exactly if it was "medicine discovery",
| but it was something important related to that idea. So,
| that's an important topic to discuss if you hold the
| belief that computation will improve all areas of living
| in invisible ways by being some sort of mathematical
| infrastructure or whatever you want to call it.
| etaioinshrdlu wrote:
| I'm having trouble understanding what this actually does. I think
| it wraps existing open source ASIC tools?
| krater23 wrote:
 | That's exactly my problem too.
| spullara wrote:
| Seems similar to the Chisel compiler for RISC-V:
|
| https://github.com/chipsalliance/chisel3
| logane wrote:
 | Chisel and this project are different: Chisel is a
 | programming language for designing hardware, while this
 | is a build tool for hardware design.
|
| In software we have a nice compilation process that transforms
| code into machine code. However, to "compile" an ASIC you go
| from a hardware description language like verilog (which
| basically describes the design) and pass it through a
| complicated pipeline that composes a number of different tools.
| Right now, engineers use ad-hoc flows that use bash/TCL to glue
| together all the parts --- the project posted above is an
| attempt to cleanly specify + control the "compilation" process.
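(As a toy sketch of the staged flow the comment describes - the stage names mirror a typical ASIC flow, but the functions are illustrative stand-ins, not SiliconCompiler's or any real tool's API - the "glue" job looks something like this in Python:)

```python
# Toy illustration of an ASIC "compilation" pipeline: each stage is a
# separate tool, and the build system's job is to chain them together.
# Stages follow a typical flow (synthesis -> floorplan -> place & route
# -> signoff); each function here is a stand-in for a real tool.

def synthesize(rtl):
    # Real flows would invoke e.g. Yosys here to produce a gate netlist.
    return {"netlist": f"netlist({rtl})"}

def floorplan(state):
    state["floorplan"] = "die/core outline"
    return state

def place_and_route(state):
    state["layout"] = f"layout({state['netlist']})"
    return state

def signoff(state):
    state["gds"] = f"gds({state['layout']})"
    return state

def run_flow(rtl, stages):
    # The "build tool" part: feed each stage the previous stage's output.
    state = stages[0](rtl)
    for stage in stages[1:]:
        state = stage(state)
    return state

result = run_flow("heartbeat.v", [synthesize, floorplan,
                                  place_and_route, signoff])
print(result["gds"])
```

The point being made above is that today this chaining is done with ad-hoc bash/TCL per project, and the posted tool tries to make it a declarative, reusable build step instead.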
| gsmi wrote:
| I don't think it's either. I am pretty sure it is a
| competitor to OpenRoad.
|
| https://theopenroadproject.org
| adapteva wrote:
 | No, it is not a competitor to OpenROAD. It's more like
 | a build system, like how make sits on top of llvm.
 | OpenROAD is one of the tools under the hood.
| naikrovek wrote:
| why is this stuff almost always Python-based?
|
| Have we not learned? Python is a fine language, but it is
| dynamically type checked, and not compiled to binaries. There is
| so much evidence all around us that statically type-checked,
| compiled languages produce better, more performant code, and
| allow for application stability and longevity much more easily.
|
| I get that Python may be your favorite language, and that's fine,
| and that alone doesn't immediately qualify it as a good choice
| for anything, if you're being objective.
| didibus wrote:
| > I get that Python may be your favorite language, and that's
| fine, and that alone doesn't immediately qualify it as a good
| choice for anything, if you're being objective
|
 | Actually, I believe you are incorrect here. From what
 | I've read, the research shows that factors that depend
 | on the programmer still contribute more heavily to code
 | correctness and success than static vs. dynamic typing.
|
| So I'd hypothesize that how much the programmer enjoys
| themselves when working and programming in a language on a
| given code base plays a significant role to the code quality,
| correctness and overall success.
| systemvoltage wrote:
| Python is just exposing the interface. It's like using Boto3
| and calling S3 python based. You should write a wrapper for
| your favorite language.
|
| Btw, what do you recommend instead of Python?
| IshKebab wrote:
| > It's like using Boto3 and calling S3 python based.
|
 | Well, except this project _is_ just the Boto3 part. It's
 | like calling Boto3 Python-based. Under the hood it uses
 | existing projects like Yosys and Verilator. The only
 | bits they've added are in Python, as far as I can tell.
|
| > Btw, what do you recommend instead of Python?
|
 | If you really want to retain the scripting aspect I
 | would use TypeScript via Deno. It's a gazillion times
 | faster than Python, has much better infrastructure (no
 | venv/setuptools/requirements.txt nonsense) and has a
 | much more solid type system.
|
| If you don't care about that I would go with one of: Rust,
| Go, Kotlin, Dart or C#. All better options than Python.
| gsmi wrote:
| Do you prefer TCL?
|
| As far as I can tell this is python replacement for
| https://github.com/The-OpenROAD-Project/OpenLane/blob/master...
|
| The CAD tools it calls will be the bottleneck, not the wrapper
| script.
| https://docs.siliconcompiler.com/en/latest/reference_manual/...
| maccam912 wrote:
| Agree that the best tool for the job should be used. In this
| case if python is gluing together different technologies to get
| this working it might have been the best tool for the job.
| Generally I would rather have my code compiled (into silicon in
| this case) be fast, even if it means the compiler itself is
| slower.
| jawnv6 wrote:
| This seems like a pretty egregious misreading of the project?
|
| Python isn't being lowered into silicon. It's a glue language
| for Boring Old Verilog that's been "compiled" into silicon
| since 1984.
|
| chip.set('source', 'heartbeat.v')
| IshKebab wrote:
| It seems you're right but I also misread it. I think it's
| badly written. Does this not sound like high level synthesis?
|
| > Process scaling is coming to an end and it is a social
| imperative that we find a new path to extend the Moore's Law
| exponential. The most viable option is extreme silicon
| specialization, which will require fast automated translation
| from program to silicon. Compiling simple programs into
| silicon should be like using llvm or gcc
|
| > development should be like programming in Python
|
| It actually seems like it is just a Python EDA build system.
| Python is still a terrible choice though.
| jawnv6 wrote:
 | I don't think the project is putting its best
 | presentation forward, but the second image on the intro
 | makes it pretty clear there's a .v/.sdc/.def file
 | kicking around. There's no relevant tooling to develop,
 | debug, or modify them, but it's clear the Python is just
 | shuffling deck chairs.
| adapteva wrote:
 | Expressing a complex project succinctly enough that all
 | of HN will get it within 5 seconds is an NP-hard
 | problem... here are the full docs:
 | https://docs.siliconcompiler.com/en/latest/
| IshKebab wrote:
| How about "SiliconCompiler is a silicon build tool
| (unfortunately) written in Python. It allows you to drive
| RTL synthesisers and simulators and their associated
| tools (e.g. layout visualisation) using a declarative
| project configuration file, also (unfortunately) written
| in Python. Examples of the tools it can drive include
 | Verilator, Yosys, Vivado and Spice. Its function is
 | roughly similar to that of CMake for C++."
|
| Assuming that's roughly accurate I think it is way
| clearer. You can remove the (unfortunately)s if you like.
| :)
| adapteva wrote:
 | Author here. Python is for the build system; plug in any
 | front end you want. If you want to turn it into silicon,
 | you will need to go through commercial tools, which
 | interface through proprietary TCL. How would you solve
 | that problem? We solved it using Python.
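(For illustration, the "Python on top, proprietary TCL underneath" pattern the author describes tends to boil down to generating a TCL run script from a Python-side configuration. The command names below are made up for the sketch, not any specific vendor's commands:)

```python
# Sketch of Python-driven TCL generation: the flow configuration lives
# in Python, and a TCL script is emitted for a (hypothetical) EDA tool
# to source. "read_verilog", "set_top" and "run_synthesis" are
# illustrative placeholders, not a real tool's command set.

def emit_tcl(config):
    lines = [f"read_verilog {src}" for src in config["sources"]]
    lines.append(f"set_top {config['top']}")
    # Flow parameters become plain TCL variable assignments.
    for key, value in sorted(config["params"].items()):
        lines.append(f"set {key} {value}")
    lines.append("run_synthesis")
    return "\n".join(lines)

config = {
    "sources": ["heartbeat.v"],
    "top": "heartbeat",
    "params": {"clock_period_ns": 10},
}
script = emit_tcl(config)
print(script)
```

The appeal is that the Python side can be versioned, validated and reused across tools, while each vendor's TCL dialect is confined to a small emitter.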
| hinkley wrote:
 | I think the reason we argue about this constantly is
 | that we treat this like a one-dimensional problem when
 | it's the classic two-dimensional problem where three
 | quadrants are tenable: people agree that there is a bad
 | quadrant and an ideal quadrant, and then we argue
 | endlessly about the remaining two.
|
| The truth is probably that the value of verifying correctness
| at compile time is proportional to how often the code will be
| run. There is space for 'glue' code that is in a simpler
| language, as long as we relentlessly search for code that
| deserves to be promoted to a more robust representation.
|
 | Take game engines, for example, where story-driven
 | elements have been driven by a scripting language, often
 | Lua, for decades now. The real engine is in a language
 | the narrators probably can't understand, which affords
 | the engine designers the right to change their minds
 | about how it actually works.
|
| I think we expect firmware and silicon to be run millions of
| times more than any particular piece of code. So the question
| is "Why Python for _this_ scenario? " and I'd like to know the
| answer too.
| IshKebab wrote:
| I agree, Python is an absolutely terrible choice here (MyHDL
| made the same mistake). Still it could be worse. Most of the
| EDA world is still heavily built on TCL!
| dnautics wrote:
| It doesn't have to be. I once made a julia->verilog transpiler
| that even recompiled your julia functions with verilator, so
| you could verify that the code was correct.
|
| https://github.com/interplanetary-robot/Verilog.jl
|
| Of course, gaining traction on something like this is tricky.
|
 | I actually think Erlang/BEAM would be a great choice for
 | making EDA tools, because it has a concurrent execution
 | model that you could probably very easily make play nice
 | with rudimentary simulations of circuits that have
 | triggers (`always @` sort of stuff).
| AshamedCaptain wrote:
 | You are missing the point. This is the language for the
 | REPL, not "compiled to binaries". Python may or may not
 | be an improvement over Tcl, but that's what you're
 | competing with here.
| z3t4 wrote:
| > why is this stuff almost always Python-based?
|
| Because schools teach Python...
|
| > but it is dynamically type checked,
|
 | There is a difference between type safety and dynamic
 | vs. static type checking. It is actually more difficult
 | to generate safe code with static checking.
|
| > compiled languages produce better, more performant code,
|
| It depends on the compiler/VM. For example taking a VM snapshot
| and saving it as a binary actually makes the program slower as
| it can not take advantage of runtime optimizations.
|
| > and allow for application stability and longevity much more
| easily.
|
 | You could statically link an app and it would live
 | through breaking changes in libs. But if there is an
 | architecture or kernel breaking change, your binary
 | won't work at all - meanwhile a VM-based language would
 | work if it has support for the new architecture.
|
| > Python may be your favorite language, and that's fine, and
| that alone doesn't immediately qualify it as a good choice for
| anything,
|
 | The best language is the language you know best. You
 | would have to be a specialist programmer who knows many
 | languages well enough to choose the best language for
 | the job. And most languages are general purpose
 | languages.
|
| Arguments aside. My interpretation of the article is that the
| author thinks it should be as easy as python, not that Python
| itself should be printed into hardware.
| ChrisMarshallNY wrote:
 | Hmm... I thought that this was a treatment of the
 | classic "silicon compiler"[0]. It basically is, with
 | Python, as opposed to the usual, which is generally
 | subsets of C or C++.
|
| That said, this is definitely a great way to handle some types of
| product development.
|
| It's just that, once it becomes hardware, different rules apply
| to pretty much everything else in the project.
|
| [0] https://en.m.wikipedia.org/wiki/Silicon_compiler
| czbond wrote:
| > It's just that, once it becomes hardware, different rules
| apply to pretty much everything else in the project.
|
| YES! As a mostly software person, I worked on a hardware
| project a few years back. It was like being kicked into the
| distant past. I had to think about code quality, updates,
| support MUCH more than I'd ever needed to consider it in the
| past.
|
 | As the movie trailer says, "In a world..." where code is
 | practically set in stone, changes are infrequent, and
 | updates require hardware upgrade cycles... yuck.
| opencl wrote:
| As far as I can tell this project is _not_ about compiling
| Python into hardware. It 's a build tool for Verilog written in
| Python.
| Symmetry wrote:
| I hear Bluespec is almost like compiling Haskell into
| hardware but I've never worked with it myself.
| pezzana wrote:
| > ... Compiling simple programs into silicon should be like using
| llvm or gcc: fast, automated, and accessible.
|
 | Some benchmarks would have been helpful. What kinds of
 | performance gains are there to be made?
| jawnv6 wrote:
| The execution time is going to seem really zippy after the 8
| week wait for the chips to be fabbed.
| nojs wrote:
| I'd also like to know this. In general, what performance gains
| would one expect to see from an IC custom made for a very
| specific algorithm versus a modern CPU?
| OneTimePetes wrote:
| Is this for ASICs or permanent SOC type solutions?
| extheat wrote:
 | So the purpose of this is basically to design an ASIC in
 | code? I'm skeptical about "extreme silicon
 | specialization" being the "most viable" replacement for
 | Moore's law. Has that not already been the case since
 | forever? Compute-intensive things have already been
 | moved onto specialized chips a la GPUs, TPUs, etc.
| ancharm wrote:
| Think of this product as infrastructure-as-code for the
| Silicon/ASIC workflow. It's a pythonic API for the ASIC/VLSI
| workflow as opposed to the traditional spaghetti script TCL
| nightmare that holds together modern ASIC flows.
| whatsakandr wrote:
 | While I only really know what an FPGA is, I wonder if
 | we're going to see a good amount of die area dedicated
 | to an FPGA in the future, allowing software to define a
 | new hardware accelerator for different tasks.
| throwaway894345 wrote:
| When I've inquired about this in the past I was told that
| relatively few programs compiled to silicon would outperform the
| same program running on a generalized chip (e.g., x86). Is this
| not the case?
| Symmetry wrote:
| For a lot of algorithms like mining bitcoins or decoding a
| video directly in silicon you'll get a couple of orders of
| magnitude improvement in power efficiency. But these are fairly
| simple algorithms without any recursion, not much control flow
| divergence, and predictable memory access patterns.
| Banana699 wrote:
| Yes.
|
| "Hardware is just crystallized software"
|
| -Alan Kay
|
| [1] http://yosefk.com/blog/its-done-in-hardware-so-its-
| cheap.htm...
|
| [2]
| https://www.reddit.com/r/Compilers/comments/e53pwa/compiler_...
| ldiracdelta wrote:
 | Not a chance. Dedicated silicon hardware for a given
 | domain is vastly faster - normally an order of magnitude
 | or more. The real question is: do you have $10 million
 | for a silicon mask set to fab your first chip?
| throwaway894345 wrote:
| Does this factor in memory access and so on? Or is it scoped
| to tight numeric computing?
| vlovich123 wrote:
| It's not scoped to anything. Think of it like assembly. If
| you understand your problem domain at a sufficiently deep
| level, you can design a more optimal architecture.
|
| For example, Google TPUs don't just have fast-paths for
| numeric computing. They change the memory layout & chip
| communication architecture to optimize for ML workloads so
| that your chip is constantly fed with data and not
| bottlenecked on memory access or cross-chip communication
| (in addition to direct dedicated access to a large memory
| that stores your data set which requires synchronization
| with system RAM and cross-chip cache invalidation, there's
| dedicated on-chip memory that's usable as scratch/storing a
| part of the data set that doesn't need any of that).
|
| There are of course potentially problems that aren't made
| meaningfully faster by going to ASIC just as there are
| problems where going from ASM to C++ or C++ to JS don't
| benefit. However, typically any problems that are hitting
| some kind of performance bottleneck could benefit & then it
| just becomes a matter of cost vs reward.
| varjag wrote:
| Dedicated silicon is made for domains where such gains are
| possible: typically numeric, parallel, non-branching tasks
| that allow for multi-stage/pipeline processing. So this is an
| excellent example of survivorship bias.
| chriswarbo wrote:
 | It depends on your definition of performance. ASICs can
 | have much lower power requirements (which is why they're
 | used for mining Bitcoin).
| AshamedCaptain wrote:
 | It's hard to say from the webpage, but this actually
 | looks like a (frontend for a) place-and-route tool. No
 | idea why they're reinventing the terminology.
| chriswarbo wrote:
| My favourite approach for this is "compiling to categories"
| http://conal.net/papers/compiling-to-categories
|
| Essentially: lambda calculus (and hence functional programming)
| can be given many different interpretations; whilst we usually
| interpret it as a computer program describing some result value,
| we can also interpret it as a circuit description, and turn it
| directly into hardware.
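(A minimal toy of that "reinterpret the program as a circuit" idea - not Conal Elliott's actual categorical machinery, and written in Python rather than Haskell for consistency with the rest of the thread - is to run an ordinary function on symbolic wires that record every gate they pass through, yielding a netlist:)

```python
# Toy "compiling to circuits" by tracing: a Wire records each gate
# applied to it into a shared netlist, so an ordinary function that
# works on booleans can also be "run" to extract a circuit description.

class Wire:
    def __init__(self, name, netlist):
        self.name, self.netlist = name, netlist

    def _gate(self, op, other):
        # Each gate gets a fresh output wire and is logged as
        # (operation, input_a, input_b, output).
        out = Wire(f"n{len(self.netlist)}", self.netlist)
        self.netlist.append((op, self.name, other.name, out.name))
        return out

    def __and__(self, other): return self._gate("AND", other)
    def __xor__(self, other): return self._gate("XOR", other)

def half_adder(a, b):
    # Ordinary-looking code: works on bools *and* on symbolic Wires.
    return a ^ b, a & b  # (sum, carry)

netlist = []
s, c = half_adder(Wire("a", netlist), Wire("b", netlist))
for gate in netlist:
    print(gate)
```

Running `half_adder` on booleans computes a result; running it on `Wire`s produces an XOR gate and an AND gate wired from inputs `a` and `b` - the same program given two interpretations, which is the essence of the paper's approach.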
| ofou wrote:
| So much python here, but it gets the job done.
| lou1306 wrote:
| Personally I have fond memories of MyHDL [0], which may be seen
| as another "code-to-silicon" converter (or at least as the first
| step of a code-to-silicon workflow). I used it only briefly, and
| on a school project that had surprisingly little to do with
| actual hardware design [1], but it really felt "Pythonic" in the
| best possible way.
|
| [0]: https://www.myhdl.org/
|
| [1]: https://github.com/lou1306/gssi/tree/master/2pc
| nobodywasishere wrote:
| This reminds me very much of edalize[1], which does something
| very similar.
|
| [1]: https://github.com/olofk/edalize
| jimmyswimmy wrote:
| Pretty neat, a python tool that converts Verilog to an IC layout
| so that you can make your own custom SOC (assuming you have a
| substantial budget to pay for fab).
|
| Since it's not clearly stated on the front page, I had to go
| digging to figure out what processes it supports. Looks like
| FreePDK45, which is "an open-source generic process design kit
| (PDK) (i.e., does not correspond to any real process and cannot
| be fabricated)" [0], ASAP7 "Warning Work in progress (not ready
| for use)" [1] and Skywater130 which "As of May 2020, this
| repository is targeting the SKY130 process node. If the SKY130
| process node release is successful then in the future more
| advanced technology nodes may become available." [2] The
| floorplanner supports their ZeroSOC [3] which I guess is based on
| TitanSOC [4]
|
| If this sounds negative, it's not, I just couldn't figure out
| what processes this was intended for without digging. ASAP7 is
| Arm and NCSU, and Skywater130 is Skywater and Google.
|
| [0] https://github.com/mflowgen/freepdk-45nm [1]
| https://docs.siliconcompiler.com/en/latest/reference_manual/...
| [2] https://github.com/google/skywater-pdk [3]
| https://github.com/siliconcompiler/zerosoc [4]
| https://github.com/lowrisc/opentitan
| adapteva wrote:
 | Thanks for reading the docs! The only open
 | manufacturable PDK is Skywater 130; I wish there were
 | more. The flow supports other commercial PDKs, but for
 | obvious reasons we can't publish those.
| lloydatkinson wrote:
| I wonder why the homepage doesn't mention FPGA or ASIC once?
___________________________________________________________________
(page generated 2021-12-07 23:02 UTC)