[HN Gopher] Things I would have told myself before building an a...
       ___________________________________________________________________
        
       Things I would have told myself before building an autorouter
        
       Author : seveibar
       Score  : 349 points
       Date   : 2025-03-28 00:38 UTC (22 hours ago)
        
 (HTM) web link (blog.autorouting.com)
 (TXT) w3m dump (blog.autorouting.com)
        
       | knodi123 wrote:
       | Man, look at all those keywords I remember from college. I wish I
       | got to use fancy well-known algorithms. Instead all I'm doing is
       | building UI components and REST APIs to present elasticsearch
       | results. All the fun stuff is buried in black boxes.
        
         | seveibar wrote:
          | Algorithms are a lot more fun now that LLMs have all the
          | geometric heuristics memorized. A lot of algorithms are just
          | inescapable in game development, so if you're itching to
          | implement one, making something like a tower defense game
          | involves a lot of the classics!
        
           | Animats wrote:
           | Right. There was a time when I had a well-worn copy of Knuth,
           | vol. 1 at hand. No longer.
        
           | tlarkworthy wrote:
            | LLMs do not have it memorized. Try asking one to calculate
            | the signed angle between two planes. I gave up and told it to
            | copy the Unity implementation.
        
         | mschuster91 wrote:
          | The core problem is a severe mismatch between academic
          | curricula and what the job market actually needs on one side,
          | and on the other the fact that companies use "needs a college
          | degree" as a proxy to weed out risk and bypass
          | ADA/anti-discrimination laws. Both are a massive drain on the
          | economy.
         | 
          | IMHO, at the very least the current CS degree needs to be split
          | up into various smaller (and faster to achieve) sub-degrees:
          | the fancy math stuff should be its own degree, possibly fused
          | with a new degree relating to AI; database and network theory
          | should be its own degree; and frankly so should stuff like
          | low-level assembler, with the "how do electronic components,
          | NAND gates, boolean logic and whatnot work" part moved off to
          | electronics engineering. And what the market actually needs the
          | most - people who can crank out CRUD app stuff - should either
          | be its own degree, if one insists that this needs academic
          | knowledge, or be moved off to something like trades education.
         | 
         | In parallel, the job requirements gatekeeping should be tackled
         | by laws - companies must no longer be allowed to require
         | degrees that have little to no impact on the actual job duties.
         | It's forcing our children to waste years of their life and take
         | on five to six figures worth of debt, all only for companies to
         | be able to weed out people.
        
       | phendrenad2 wrote:
        | I wonder how autorouters encode all of the various electronics
        | rules, such as min distance between traces, max angle of a turn,
        | how to snap multiple connections together so they look good, etc.
        
         | thrtythreeforty wrote:
         | As far as I know, the design rules are managed as net
         | properties, and the typical auto router opinion of aesthetics
         | is "lol." So a lot of effort that could be spent typing in
         | rules and cleaning up aesthetics is often spent just routing
         | the damn board manually.
        
       | Animats wrote:
        | That's a great discussion of autorouting. Then he ends with:
        | _"key piece to enable the "vibe-building" of electronics."_ Ouch
       | 
       | Routing itself is easy. It's when the router has to tear up stuff
       | it already routed to fit new stuff in that things get complicated
       | and the combinatorics start to get you.
       | 
       | I miss the autorouter KiCAD used to have. It was taken out for
       | iffy IP reasons (the author had worked for an autorouting
        | company). Reactions to users who wanted it back were along the
       | lines of "Real Men Don't Use Autorouters".[1]
       | 
       | [1] https://forum.kicad.info/t/autorouting-and-
       | autoplacement/185...
        
         | seveibar wrote:
         | Haha I feel like the right reaction to "vibe-*" is to cringe. I
         | cringe a little bit every time I see someone promoting a vibe-
         | coded app at the moment, but I think back to how I got started
         | in coding (I was constantly annoying people on old ActionScript
         | forums to fix my code) and I see so much potential in people
         | being able to get started quickly in any domain. I hope that
         | our autorouter (and others that follow!) will similarly allow
         | people to ship their first electronics without needing tons of
         | guidance or formal education.
         | 
         | That said a good autorouter should also be useful to
         | professionals! So hopefully we help with that as well!
        
         | bsder wrote:
         | I wish these folks well and hope that their autorouter gets
         | folded into KiCad.
         | 
          | However, I'm one of the cranky old people who don't really want
          | to see KiCad expend any energy on autorouters: PCB autorouters
          | are a pain in the ass that never work.
         | 
         | We can look at VLSI autorouters to determine why that is. VLSI
         | autorouters also were a pain in the ass that never worked. But
         | what happened was that VLSI suddenly got lots of layers and you
         | could dedicate a layer to routing vertically, a layer to
         | routing horizontally, a layer to routing power and still have a
         | couple of layers for global vertical interconnect, global
         | horizontal interconnect, and global power.
         | 
         | The fundamental problem with PCB autorouting is that PCBs have
         | _WAY_ more obstructions than VLSI chips do. First, components
         | themselves are obstructions and choke points. Second, PCB vias
         | almost always obstruct all the layers of the board while VLSI
         | vias only obstruct the two layers being connected. Third, PCB
         | vias tend to be bigger than your interconnect metal width.
         | Fourth, the number of PCB layers in use is _way_ smaller than
         | the number of layers in VLSI--the most common numbers of layers
         | are 4 layers (most projects--of which only 2 are really used
         | for general routing), 2 layers (because of cost engineering--
         | good luck autorouting these) and 6 (a tiny minority).
         | 
         | It all adds up to PCB autorouting being a significantly more
         | complicated job than VLSI autorouting.
        
           | seveibar wrote:
           | Author here. Lots of great points being made. I want to throw
           | in a crazy prediction.
           | 
           | I think routing is basically an image transformer problem
           | without a good dataset right now. If the eye of the giant AI
           | companies turned to autorouting and large synthetic but
           | physically simulated circuit datasets were created, we would
           | basically be done and autorouting would be a solved problem.
           | 
           | This means that all the work I'm doing now on heuristic
           | algorithms, as well as all the hard work done by humans, will
           | probably not be needed in the future. I just don't see
           | autorouting as being more difficult (in terms of solution
           | space size) than the art being produced by transformer models
           | right now.
           | 
           | I'm saying this because you're right, these heuristic
           | algorithms can only get us so far- the problem is really
           | difficult. But our human intuition, the magic black box
           | operation we do, doesn't seem to be too far beyond the
           | emerging transformer image models.
        
             | theamk wrote:
              | The major difference is that in a PCB, every single track
              | has to abide by the rules; no exceptions are allowed if you
              | want your board to work.
              | 
              | AI-generated art, on the other hand, is full of small
              | defects which people just ignore - who cares about
              | unnatural shadows or unrealistically large fingers?
             | 
              | It is possible to iteratively combine AI with a DRC checker
              | and loop until all is good, but it's not obvious to me that
              | this would be performant enough, or whether the system
              | would stay stuck in some sort of local minimum forever once
              | the circuit is complex enough.
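              | 
              | Roughly the loop I mean, as a sketch (all types and names
              | here are made up):
              | 
              |   type Board = { traces: string[] };
              |   type Violation = { rule: string; trace: string };
              | 
              |   // Sketch of an iterate-until-DRC-clean loop.
              |   async function routeUntilClean(
              |     generate: () => Promise<Board>,
              |     checkDrc: (b: Board) => Violation[],
              |     repair: (b: Board, v: Violation[]) =>
              |       Promise<Board>,
              |     maxIters = 10,
              |   ): Promise<Board | null> {
              |     let board = await generate();
              |     for (let i = 0; i < maxIters; i++) {
              |       const violations = checkDrc(board);
              |       if (violations.length === 0) return board;
              |       // Risk: on a complex circuit the repair step
              |       // may oscillate instead of converging.
              |       board = await repair(board, violations);
              |     }
              |     return null;
              |   }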
        
               | seveibar wrote:
                | The same way Claude never outputs code that has a syntax
                | error, the image transformers will output DRC-compliant
                | "images"!
               | 
               | I think spatial partitioning can help solve issues with
               | minor DRC violations as well- it should be easier to
               | correct an image than to generate one from scratch. But
               | I'm not even sure it'll be necessary because of how
               | coherent image models already are.
        
               | UncleEntity wrote:
               | You're suggesting the robots can learn the routing
               | algorithms and rules just by looking at a bunch of
               | pictures?
               | 
               | Sure, maybe, given a super-massive amount of data.
               | 
               | I see it as the difference between "I want a photo-
               | realistic portrait of a beagle" and "I want a photo-
               | realistic portrait of my neighbor's beagle, Bob". The
               | first one there's some general rules as to what makes
               | something a 'beagle' so is not too hard while the second
               | has specific constraints which can't be solved without a
               | bunch of pictures of Bob.
               | 
               | To address the specific issue, an AI would have to learn
               | the laws of physics (aka, "Bobness") from a bunch of
               | pictures of, essentially, beagles in order to undertake
               | the task at hand.
        
               | MrBuddyCasino wrote:
               | > would have to learn the laws of physics
               | 
                | I forget the name unfortunately, but there was a project
                | recently where they combined AI with physically correct
                | modeling.
                | 
                | EDIT: found it, the Genesis project:
               | https://x.com/zhou_xian_/status/1869511650782658846
        
               | grues-dinner wrote:
               | Claude doesn't usually produce code that actually works
               | though. Passing DRC is one thing (no syntax errors).
               | Actually works is another (compiles and executes with the
               | desired effect as a complete application).
               | 
               | And you don't even get to use unit tests to check
               | correctness.
        
             | paulgerhardt wrote:
             | In agreement.
             | 
              | I think maybe the best way to get this dataset is to pay a
              | few dozen electronics recycling centers for every unique
              | microCT scan they send you. Lease them the
             | tomography equipment. They increase their bottom line, you
             | get a huge dataset of good-to-excellent quality commercial
             | PCB designs.
        
           | grues-dinner wrote:
           | > 6 (a tiny minority)
           | 
           | I don't think that's true. Perhaps by number of PCBs made 2
           | and 4 layers dominate: all those IoT doohickeys and alarm
           | clock displays. And even single layer phenolic boards. And
           | for most hobbyist work with little MCUs, 4 layers is a sweet
           | spot. But they're usually either very simple devices where
           | routing is not the problem, or they have very tight
           | application constraints where the placement has to be
           | squeezed and squeezed.
           | 
           | But much of the _effort_ in designing PCBs in industry is on
           | 6+ layers. You can probably smash out a simple smart
           | lightswitch PCB in a day. Boards with BGA SoCs, DDR, PCIe and
           | FPGAs can take whole teams months or more and have incredibly
           | complicated constraints, many of which are implicit (the very
           | simplest: put cap near pin, test points inline, make diff
           | pair via symmetrical, keep this SMPS far from sensitive
           | things, and make the inductor loop as tiny as you can). There
           | are a million ways to make a PCB pass DRC and still be a
           | completely non-functional device. In particular, routing is
           | secondary in terms of effort and importance to placement.
           | 
           | If you sample what a random PCB engineer is working on, it's
           | quite likely to be lots of layers, or an extremely tight
           | layout, and probably both. Or something weird and
           | application-dependent like high voltage. And they're probably
           | fiddling with placement at the random sample time you choose.
           | 
           | Toy examples of sparse layouts like mechanical keyboards and
           | DIP ICs are very unrepresentative of where the real effort,
           | and money, goes.
        
       | swayvil wrote:
        | Don't diss Monte Carlo.
       | 
       | 10000000000 random guesses can be way faster than a clever
       | calculation.
       | 
       | And maybe there is no clever calculation.
        
         | TheHideout wrote:
         | Agreed, if you aren't using Monte Carlo methods in your
         | algorithms then your problem probably isn't hard enough, or
         | your solutions are fragile to variance.
        
       | ChrisGammell wrote:
       | I'm generally in the "never trust the autorouter" camp (and by
       | extension "never trust the bevy of AI tools entering the space"),
       | but it's undeniable there are some big opportunities in the eCAD
       | space to speed up some aspects of layout. I'm probably more
       | likely to use co-creation tools, rather than full auto tools,
        | simply because I rely heavily on iteration. Many times when
        | starting a design my placements aren't set, and that has a huge
        | impact on routing. I didn't see on your page whether placement
        | is part of your algorithm. I already rely on tools like
        | push-and-shove and, occasionally, autocomplete.
       | 
       | I'm always curious about people entering the space though. It is
       | a small market, IMO. The tools are fractured, the existing
       | players are lumbering behemoths, and the users are cranky zealots
       | (you will have to pry KiCad out of my cold dead hands). I noted
       | the step about the AR being written in JavaScript, which I have
       | no real opinion on, but I'm curious about plans to plug into
       | ecosystems (CAD vendors, OS tools) or try and draw people into
       | another new ecosystem.
        
         | seveibar wrote:
         | We will absolutely be supporting KiCad! Placement is something
         | we have big plans for but I think it's important to have a
         | really fast, cache-friendly autorouter as a foundation (if you
         | are cache friendly, moving components and trying different
         | layouts is much much faster)
         | 
            | JavaScript is fairly portable now with many runtimes (some
            | quite small, such as QuickJS or Porffor), so I anticipate
            | people will be able to run it locally and build their own
            | ginormous cache :)
         | 
         | I think everyone should be concerned about lockin and
         | fragmenting ecosystems in EDA, but tscircuit and our autorouter
         | are MIT-permissive technology (very rare in EDA) so we play
         | nice (interoperable) with everyone.
        
           | _fizz_buzz_ wrote:
            | How are you planning to support KiCad? Would it be using
            | KiCad's new IPC API? In principle it should be possible to
            | create JavaScript bindings for the new protobuf interface.
        
             | seveibar wrote:
             | We are looking hard at the IPC interface or doing a plugin,
             | but we will also support "uploading" kicad_sch/kicad_pcb to
             | a browser (local-first, so not actually uploading)
        
           | cushychicken wrote:
           | Hey - take a look at my other reply to u/ChrisGammell - I
           | have some input on your project you might find useful that I
           | don't feel like retyping lol
        
           | micw wrote:
           | > We will absolutely be supporting KiCad
           | 
            | I wonder why there's no standard API for autorouting, so that
            | no particular EDA would have to be specifically supported.
            | I'd love to see an autorouter as a standardized HTTP endpoint
            | so that I can run one locally or even buy
            | Autorouter-As-A-Service. E.g. I'd happily pay for TopoR [1]
            | as a service for some projects while others are fine with
            | classic routing.
            | 
            | [1] https://de.wikipedia.org/wiki/TopoR
        
             | seveibar wrote:
             | Software innovation in EDA seems pretty slow and is more
             | IP-sensitive, so there hasn't been a huge proliferation of
              | cloud services. We're trying to bring up some
             | standards that make EDA more web-friendly, like the Circuit
             | JSON[1] format and the Simple Route JSON format[2], but
             | it'll still be years before the tools support web-first
             | standards (kind of reminds me of png vs webp)
             | 
             | [1] https://github.com/tscircuit/circuit-json
             | 
             | [2] https://docs.tscircuit.com/advanced/simple-route-json
        
             | buescher wrote:
             | The de facto standard external autorouter workflow is
             | through specctra files.
        
         | Joel_Mckay wrote:
         | For any benefit the auto-router adds, it usually ends up
         | costing a project later...
         | 
         | In general, impedance controlled UHF design work with domain
         | specific simulation tools is the modern workflow. Thus, one
         | manually does the critical tracks first, island-pours, and
         | finally power interconnects.
         | 
         | KiCad is slightly better than nothing for layout, and trying to
         | make it into yet another half-baked simulation tool seemed
         | silly. =3
        
           | cushychicken wrote:
           | I wish the Kicad team had sunk all the time they put into
           | simulation support (which I rarely use, in or out of kicad)
           | into a proper constraint system instead.
           | 
           | If I need to sim, I'll just use LTspice or TI TINA - I.e. the
           | simulation ecosystem that includes the part model I need,
           | with no wrangling to get that spice model to play nicely in
           | some other spice toolchain.
        
             | Joel_Mckay wrote:
             | Micro-cap 12 is still available, was made free, and runs in
             | Wine64 just fine.
             | 
             | https://web.archive.org/web/20230214034946/http://www.spect
             | r...
             | 
             | For learning, the current LTSpice models and QUCS VNA
             | profiled device imported simulation data are reasonable
             | tools as well.
             | 
              | I use KiCad all the time for personal projects, and it has
              | recently addressed the major issues people found annoying:
             | 
             | 1. unstable library version dependency management
             | 
             | 2. complex design collaboration over traditional versioning
             | tools like git (still an upstream feature last I checked,
             | but should be available soon.)
             | 
             | 3. freezing the specific KiCad and library path structure
             | versions across a project, as assuming every design has a
             | single workstation workflow is a contradictory goal
             | 
             | KiCad is better than most current EDA software, but that is
             | mostly a sad reflection on the traditional tools available.
             | =3
        
             | _fizz_buzz_ wrote:
              | I love the ngspice integration in KiCad; it has gotten
              | really good since KiCad 7 or 8.
        
         | cushychicken wrote:
         | _The tools are fractured, the existing players are lumbering
         | behemoths, and the users are cranky zealots (you will have to
         | pry KiCad out of my cold dead hands)._
         | 
         | These last five years have been _absolutely incredible_ to
         | watch in Kicad's development. Last two releases have added the
         | two major things that Kicad didn't have that professional CAD
         | tools did:
         | 
         | * database support
         | 
         | * outjob features
         | 
          | Beyond that, it's really just a matter of adoption and end user
          | preference for how to put these features to work. (Databases
          | tend to come with more internal corporate bureaucracy around
          | organizing data than anything else.)
         | 
         | But, for the topic at hand: don't you think Kicad is already
         | moving _sort of_ in the direction you talk about, as far as
         | workflow to speed up layout is concerned?
         | 
         | Best example I can think of is the "trace autocomplete" feature
         | in I think 7.0. Hit hotkey (I think it's F in pcbnew) and the
         | trace will be laid for the track currently being placed.
          | Combine this with "route from other end of track" (hotkey E)
          | and this is a _blazing_ productivity boost, especially when
          | you're working across two different ballout grids.
         | 
         | Version 9 enabled buses/multiple tracks to be draggable,
         | meaning this whole system could go even faster still.
         | 
          |  _Many times when starting a design my placements aren't set
         | and that has a huge impact on routing._
         | 
         | Honestly, I'd trust an autorouter to complete a good chunk of
         | my designs if I can get to a placement I'm satisfied with, and
         | I can give the autorouter some constraints on where to route.
         | For example: did a board last year using an NXP iMX8MP with an
         | eMMC. The peripheral ballout on the processor matches the eMMC
         | ballout really nicely - was just a matter of lining the chips
         | up and drawing the lines. An autorouter could have done in
         | seconds what it took me ~10 minutes if it had known to keep
         | everything in the data bus on the top layer.
         | 
          | I think that's a success criterion autorouter projects suffer
         | from: they don't consider their autorouter "done" unless their
         | router can do _everything_ on a board. As a working EE, _I
         | don't actually want that_. I want an autorouter that can work
         | with me to do a satisfactory job on a little chunk of the
         | design at a time, give me some time to review it for
         | correctness, then move on to the next chunk.
         | 
         |  _That_ would be killer. Especially if you could give
          | constraints beyond layers, i.e. "keep all nets with names
          | D0-7 on layers one and three, and match their lengths to
          | within 5mm of one another. Use D0 as the net length
          | reference." If you can do
         | that, then boom: you've solved DRAM length tuning. Huge range
         | of design complexity now within reach of the unwashed masses.
         | 
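          | Sketched as data, the kind of constraint I mean might look
          | something like this (all names made up, not any existing
          | tool's API):
          | 
          |   // Hypothetical constraint spec - illustrative only.
          |   const dramByteLane = {
          |     nets: ["D0", "D1", "D2", "D3",
          |            "D4", "D5", "D6", "D7"],
          |     allowedLayers: ["F.Cu", "In2.Cu"], // layers 1 and 3
          |     lengthMatch: {
          |       reference: "D0",  // D0 is the length reference
          |       tolerance: "5mm", // match others to within 5mm
          |     },
          |   };
          | 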
         | OP - if I can find some time to show you what I mean, I'd be
         | more than happy to give you a demo of what I'm talking about.
        
           | seveibar wrote:
           | Hey cushychicken! Would love to see what you're working on
           | and thanks for explaining (seveibar at tscircuit dot com is
           | my email if you want to set something up!)
           | 
            | I do think humans need to be in the loop on autorouters, but
            | I also think autorouters should be very fast and decent. In
            | web design we progressively add constraints (via CSS) to
            | tweak the output of the visual design and see the result
            | instantly; I believe this same workflow is possible for PCB
            | design (and will favor those good at specifying constraints!)
        
             | cushychicken wrote:
             | Thanks for the reply. I'll get in touch via email in the
             | next few days.
             | 
             | I don't think the goals of "human in the loop" and "fast
             | and decent" are mutually exclusive, or really even at odds
             | with each other.
             | 
             | I just think that most autorouters I've tried seem to have
             | been written by CS people who thought PCB layout was an
             | interesting problem to solve - without a particularly deep
             | understanding of what working EEs care about in terms of
             | _how_ you route the board.
             | 
             | Case in point: plenty of autorouters seem to work on the
             | paradigm of "all tracks routed? Great. I'm done." Never
             | mind that everything is two layers of spaghetti: no ground
             | plane, no power planes, no regard for SI or EMC compliance.
             | In other words: pure hell to test and certify.
             | 
              | Not trying to be a crotchety EE by saying this - I can give
              | a
             | good example of what I mean in real time. I also feel like
             | I'm a bit of the exception in the EE community in that I
             | think this is a solvable problem. I just don't think many
             | CS people have really bothered to consult any real working
             | EEs about their workflows and the downstream portion of
             | test and certification.
        
         | rjsw wrote:
         | I once had to do bringup on an autorouted prototype PCB. The
         | traces between the CPU and DRAM looped three times round the
         | board.
        
         | buescher wrote:
         | Years ago the long-gone and unlamented OrCAD Layout had a
         | spreadsheet view of your nets that was a slightly better than
         | good-enough interface for setting constraints for autorouting.
         | Once you got your footprints, placement, constraints, and hand-
         | routed nets locked down, you could iterate very quickly.
         | 
         | It's nice to see anyone at all working on PCB autorouters,
         | which have been pretty stagnant since Cadence bought SPECCTRA
         | in the nineties. The guys who wrote SPECCTRA went to the VLSI
         | world iirc and never came back - I guess that's where all the
         | fame and fortune is. Probably was a patent minefield for a
         | while there too; might still be. Autoplacement was an utterly
         | intractable problem back then, still seems to be, and it's
         | probably ripe for a generative AI approach - a good generative
         | AI first pass at parts placement could be a net time saver. The
         | biggest problem would be convincing the cranks that you can be
         | good even if you can't be perfect.
         | 
         | I'm sort of puzzled by the kids out there trying to do
         | schematics-as-code. On the one hand, I would love to see that
         | work as a backend format - especially where folks like the jitx
         | guys seem to be making a lot of progress in encoding appnote
         | and datasheet-level design rules into parts models. Reading
         | every datasheet in the detail needed for commercial designs is
         | more work than people expect, and so is getting junior
         | engineers on board with the process. So any automation there is
         | beneficial. The problem is that the approaches all seem rooted
         | in this idea of schematics as data entry for layout, a kind of
         | source code, rather than as design documentation with a
         | carefully evolved visual language that needs to be accessible
         | to people who won't have an EDA suite installed on their
         | computer. I guess people who learned from puzzling out
         | schematics in the Adafruit/Sparkfun/Shenzhen style with
         | explicit wiring minimized might not understand the value of a
         | good schematic.
         | 
         | The other thing is leaning too far into thinking-by-analogy and
         | trying to make PCB-level design like VLSI design. I don't think
         | it's entirely impossible - if we had better tools for DRC and
         | validation, component-level design would look more like VLSI
         | design. But the design space is so different, with such loose
         | coupling between design, EDA/CAM/simulation, validation,
         | fabricators, assemblers, component vendors,
         | regulators/certifiers, and so on that just getting one corner
         | of it really right would be a major accomplishment.
        
       | nine_k wrote:
       | This is great. As someone not working directly on 2D / 3D space
       | problems, my biggest take-away is the value of visualizing
       | things. Humans are really good at grasping and analyzing
        | pictures. Another is the idea of using a stochastic or
        | brute-force method to understand the shape of the problem, so to
        | speak, and choosing a better method based on that, not from a
        | theoretical understanding alone.
        
       | Lerc wrote:
       | I am a strong proponent of visualisations. I also have a tendency
       | to write things in JavaScript that others might choose a
       | different language. I'm now wondering if those things are
       | related. Having the browser to provide an interface can make that
       | a lot easier. Recently I have been doing a fair bit of
       | transformer and NNet stuff so consequently Python. I have found
       | it a bit of a step back when it comes to visualisations. It's
       | easy to used canned techniques to plot data, but not as easy to
       | knock up a custom interactive visualisation.
       | 
       | Hmm, I wonder what python raylib is like, that might be the
       | ticket.
        
         | rocqua wrote:
         | This is why I use python with Jupyter notebooks.
         | 
         | Though admittedly animations are quite arduous.
        
         | misiek08 wrote:
          | I'd love to see the easiest way, for a hardcore backend guy
          | whose part of the brain responsible for visuals is completely
          | dumb, to quickly visualize things in JS. I would pay for a
          | "d3/p5 tutorial".
        
           | seveibar wrote:
           | Author here. I would recommend booting up React Cosmos[1] and
           | prompting Claude to "generate a react app that...". You can
            | visualize inside of Claude, then, when you're finished, drop
            | Claude's output into React Cosmos as a file. This is great
            | for prototyping algorithms and recording the
           | progression.
           | 
           | For example you might say "generate an interactive react app
           | that implements a simple pathfinder that uses A* with a
           | penalty for going near edges". You can also just drop in a
           | paper or pseudo code of an algorithm you're ideating on.
           | 
            | Here's an example of how we use React Cosmos for our
           | autorouter[2]
           | 
           | [1] https://reactcosmos.org/docs [2] https://unraveller.verce
           | l.app/?fixtureId=%7B%22path%22%3A%22...
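            | 
            | For a rough idea of what you'd be iterating on, here's a
            | minimal sketch of that A*-with-edge-penalty example (plain
            | TypeScript, no React, all names hypothetical):
            | 
            |   type Cell = { x: number; y: number };
            | 
            |   function astar(
            |     w: number, h: number,
            |     start: Cell, goal: Cell,
            |     blocked: (c: Cell) => boolean,
            |   ): Cell[] | null {
            |     const id = (c: Cell) => c.y * w + c.x;
            |     // Extra cost near the edge nudges paths inward.
            |     const edgePenalty = (c: Cell) => {
            |       const d = Math.min(c.x, c.y,
            |                          w - 1 - c.x, h - 1 - c.y);
            |       return 2 / (1 + d);
            |     };
            |     const hDist = (c: Cell) =>
            |       Math.abs(c.x - goal.x) + Math.abs(c.y - goal.y);
            | 
            |     const open = [{ c: start, g: 0, f: hDist(start) }];
            |     const came = new Map<number, Cell>();
            |     const best = new Map([[id(start), 0]]);
            | 
            |     while (open.length) {
            |       // Poor man's priority queue: pop the lowest f.
            |       open.sort((a, b) => a.f - b.f);
            |       const { c, g } = open.shift()!;
            |       if (c.x === goal.x && c.y === goal.y) {
            |         const path = [c];
            |         let cur = c;
            |         while (came.has(id(cur))) {
            |           cur = came.get(id(cur))!;
            |           path.unshift(cur);
            |         }
            |         return path;
            |       }
            |       const dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
            |       for (const [dx, dy] of dirs) {
            |         const n = { x: c.x + dx, y: c.y + dy };
            |         if (n.x < 0 || n.y < 0 ||
            |             n.x >= w || n.y >= h || blocked(n)) continue;
            |         const ng = g + 1 + edgePenalty(n);
            |         if (ng < (best.get(id(n)) ?? Infinity)) {
            |           best.set(id(n), ng);
            |           came.set(id(n), c);
            |           open.push({ c: n, g: ng, f: ng + hDist(n) });
            |         }
            |       }
            |     }
            |     return null;
            |   }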
        
         | WillAdams wrote:
         | Visualization is actually one of the reasons I've been
         | successful with my current project (creating a 3D modeling
         | library which uses 3D tool representations and movement as
         | afforded by G-code to write out G-code or DXFs) --- the tool
         | which makes it possible is:
         | 
         | https://pythonscad.org/
         | 
         | Being able to just plot something and see the algorithm on-
         | screen has been an incredible benison.
         | 
         | I'm hopeful OpenPythonSCAD will be merged into OpenSCAD
         | presently.
        
           | opwieurposiu wrote:
           | So you are working on a way to generate toolpath gcode from
           | python? I need that.
           | 
           | I have a python workflow for my laser cutter, I can make a
           | drawing with drawsvg and then cut it with lightburn.
           | 
           | I have used build123d to make 3d models in python but not yet
           | found a way to generate toolpaths for my milling machine.
        
             | WillAdams wrote:
             | Yes.
             | 
             | Give the code a try!
             | 
             | https://github.com/WillAdams/gcodepreview
             | 
             | If you can describe a simple part I can work it up for you
             | and add it to the examples.
             | 
             | EDIT: Alternately, try pyCAM, or Kiri:Moto w/ an STL?
        
       | MrLeap wrote:
       | Almost everything matches my gamedev heuristics. I even have
       | empathy for choosing JS. I'm making a game modding framework
       | right now that operates on a lispy kind of s-expressions.
       | Optimizing to accelerate creative iteration time > *, I've found.
       | 
        | A*, Lee's algorithm and the like are all cool. It's criminal to
        | write any kind of floodfill without having an accompanying
        | visualization - you're squandering so much dopamine.
       | 
       | This article has me wondering if all the gamedev things I
       | *didn't* read about but are adjacent have utility in this kind of
       | thing. I can't be the first person to think a boids router would
       | be pretty fun. More seriously, I bet jumpflooding signed distance
       | fields would provide you a lot of power.
       | 
        | Everything about spatial hashing in particular matches my
        | experience. Haven't found many occurrences in almost 2 decades
        | where any of the tree structures are worth the time. One notable
        | exception: the lovecraftian text editor I made uses quite a lot
        | of tries for reactivity things. Nice way to compress 45,000
        | words into a compact state machine for event handling.
        
         | seveibar wrote:
         | It is a really fun idea to build a boids router (shelving that
         | for a future article!) I wrote previously about recursive
         | pattern autorouters which are really good at having small
         | solution spaces (and therefore, easier to get conventional
         | machine learning algorithms to predict). There are so many
         | interesting unexplored areas in autorouting!!
         | 
         | I hadn't heard of jumpflooding (for others: fast, parallel
         | algorithm for approximating distance fields), that could
         | definitely be interesting, thanks for the tip!
        
         | shadowgovt wrote:
         | I think the trees were a lot more useful in the past when
         | memory and caches were smaller (and I suspect they can still be
         | useful for precomputation, though I'd have to sit down and
         | benchmark fixed-grid-with-smart-sizing vs. tree). Trees are
          | also amenable to recursive algorithms, but the author has noted
         | that they have reasons to choose iterative over recursive
         | algorithms, so these pieces of advice synergize.
         | 
         | (It is perhaps worth noting: broadly speaking, "recursive" vs.
         | "non-recursive" is a bit of an artificial distinction. The real
         | question is "does a pre-baked algorithm with rigid rules handle
         | flow control, or do you?" If you care about performance _a lot_
         | , you want the answer to be you, so having your run state
         | abstracted away into an execution-environment-provided stack
         | that you can't easily mutate weirdly at runtime begins to get
         | in your way).
        
       | kayson wrote:
       | I want to love hardware as code. I really do. I love
        | infrastructure as code, devops as code, you name it. But I just
       | don't think it's ever going to happen. At least not fully.
       | Something about it to me just has to be graphical. Maybe it's the
       | information density, or the fact that circuits are graphs /
       | networks, or maybe it's just habit.
       | 
       | On the topic of autorouters, the post doesn't really go into why
       | there are no open source data sets and why there aren't many open
       | source tools. It's probably no surprise but there's a TON of
        | money in it. The author mentions iPhones and 10s of thousands of
       | routes. That's nothing. Modern ICs have billions of transistors
       | and billions of routes with an insanely complicated rule set
        | governing pitch, spacing, jogs, etc. Not to mention the entire
       | time you have to make sure the parasitic resistance and
       | capacitance from the routing doesn't break the timing
       | assumptions.
       | 
       | Cadence and Synopsys probably put billions yearly into R&D and I
       | bet a big chunk goes into "Place & Route". I've heard that
        | licenses for their tools run into the millions of dollars per head
       | per year.
        
         | leoedin wrote:
         | I had a play with the tscircuit CAD tool which they're writing
         | this autorouter for. It's hardware-as-code, leveraging React.
         | 
         | I love the concept of schematics-as-code. But layout-as-code is
         | awful. Maybe AI tools will improve it enough - but unless
         | there's a user friendly escape hatch which lets you tweak
         | things with a mouse, it's a dead end.
         | 
         | I've had the same problem with auto-generated wiring diagrams
         | and flowcharts. Great in theory, but then the tool generates
         | some ugly thing you can't make sense of and there's no way to
         | change it.
         | 
         | There's definitely tremendous potential for AI in electronic
         | design. But I think the really complicated bit is component
         | selection and library management - you have to somehow keep
         | track of component availability, size and a range of parameters
         | which are hidden on page 7 of a PDF datasheet. Not to mention
         | separate PDF application notes and land patterns.
         | 
         | Compared to all that, connecting up schematic lines and laying
            | out a PCB is fairly straightforward.
        
           | seveibar wrote:
           | Love that you tried the tool and couldn't agree more. We
           | think that new layout paradigms similar to flex and CSS grid
           | need to be invented for circuit layout. This is an active
           | area of research for us!
        
       | aranchelk wrote:
       | > 2. Implementation Language doesn't matter
       | 
       | > I'm controversially writing our autorouter in Javascript. This
       | is the first thing people call out, but it's not as unreasonable
       | as you might expect. Consider that when optimizing an algorithm,
       | you're basically looking at improving two things:
       | 
        | > Lowering the number of iterations required (make the algorithm
        | > smart)
        | 
        | > Increasing the speed of each iteration
       | 
       | It may be true in this domain, I wouldn't know, but applied to
       | software engineering in general IMO it would be a massively
       | incorrect assumption to say choice of language doesn't affect
       | speed and needed number of iterations.
        
         | bigiain wrote:
         | I think there's a fair argument to be made that when you're
         | chasing Big O algorithmic improvements, then the effective
         | constant terms incurred by "faster or slower language
         | execution" are premature optimisation. The difference between
         | Rust or hardcoded assembler compared to Javascript or
         | VisualBasic are pretty irrelevant if you're still trying to get
         | your exponent or polynomial terms under control.
        
           | HdS84 wrote:
            | Yep. Also, JS is easy to iterate on - it's more lenient than
            | Rust or, say, C#. Especially if you are exploring things,
            | that's a huge boon. Obviously, for the same algorithm,
            | compiled languages will be faster. Does it matter? Maybe. But
            | time to find the algorithm is also critical - and if JS helps
            | here, it's a good choice.
        
           | kalaksi wrote:
           | But what about afterwards? You move to a more performant
           | language or just accept that you're now invested in JS?
        
             | andrewaylett wrote:
             | One or the other, probably -- Facebook were sufficiently
             | invested in PHP that they released a whole new VM for the
             | language. In Python, one would relatively commonly extract
             | the performance-critical sections into C (or, nowadays,
             | Rust) and you could do the same with Node too. Or the JIT
             | might be fast enough that it's not worthwhile.
        
           | windward wrote:
           | The argument falls apart when the premise that you're chasing
           | big O does.
           | 
           | Poor cache/memory/disk accesses can result in constant
           | regressions so vast that a 'worse' algorithm actually fares
           | better. Also, we tend to falsely settle on referring to 'O'
           | rather than omega, theta, and average, even when we don't
            | care about worst-case or contrived workloads. See quicksort
           | and mergesort.
           | 
           | For a similar concept in another domain, see also the
           | external memory model:
           | 
           | https://en.algorithmica.org/hpc/external-memory/model/
        
         | imtringued wrote:
         | I think Javascript might doom the autorouter to small scale
         | designs or very long processing times, but I have never used
         | tscircuit so maybe I'm wrong.
        
           | seveibar wrote:
            | Believe in the cache! There's a reason most websites can be
            | written in JavaScript now and not have major issues: the I/O
            | bottleneck matters more than the raw computation. As long as
            | the algorithm is cacheable (and this means each stage
            | maintains partial/spatial caches!) the implementation
            | language shouldn't be very important.
           | 
           | There are specific NP Hard problems that aren't spatially
           | cacheable and we may need WASM for, but these problems tend
           | to be nearly impossible for humans as well, so PCB designers
           | will opt to just add some space on the board or add
           | additional layers.
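            | 
            | As a rough sketch of what I mean by a spatial cache
            | (hypothetical, not necessarily how our autorouter does it):
            | key each sub-route solve by the obstacles overlapping its
            | region, so edits elsewhere on the board don't invalidate it:
            | 
            |   type Route = { x: number; y: number }[];
            |   type Obstacle = { x: number; y: number;
            |                     w: number; h: number };
            |   type Region = { minX: number; minY: number;
            |                   maxX: number; maxY: number };
            | 
            |   const cache = new Map<string, Route>();
            | 
            |   function cacheKey(
            |     region: Region, obstacles: Obstacle[],
            |   ): string {
            |     // Only obstacles overlapping the region feed the
            |     // key, so moving an unrelated component keeps the
            |     // cached entry valid.
            |     const local = obstacles
            |       .filter((o) =>
            |         o.x < region.maxX && o.x + o.w > region.minX &&
            |         o.y < region.maxY && o.y + o.h > region.minY)
            |       .map((o) => `${o.x},${o.y},${o.w},${o.h}`)
            |       .sort()
            |       .join("|");
            |     return `${region.minX},${region.minY},` +
            |            `${region.maxX},${region.maxY}#${local}`;
            |   }
            | 
            |   function solveWithCache(
            |     region: Region, obstacles: Obstacle[],
            |     solve: () => Route,
            |   ): Route {
            |     const key = cacheKey(region, obstacles);
            |     let route = cache.get(key);
            |     if (!route) {
            |       route = solve();
            |       cache.set(key, route);
            |     }
            |     return route;
            |   }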
        
       | sitkack wrote:
       | What a wonderfully written post! Much respect.
        
       | djmips wrote:
       | This blog post sings to me. Well done. I remember the time a
       | colleague effectively used A* as the optimizer for the AI's plans
        | in a 2-player game. Indeed it really can be purposed for a lot
        | of different things!
        
       | weinzierl wrote:
       | _" Know A* like the back of your hand, use it everywhere"_
       | 
       | Are state-of-the-art autorouters still based on A*?
       | 
       | From what I remember A* was used in the 80s before Specctra
       | introduced a cost based push and shove capable approach in the
       | mid 90s.
       | 
        | A*, being what it is, is probably still a part of contemporary
        | solutions, but I am surprised about the emphasis it got in the
        | post.
        
       | kalaksi wrote:
       | > 2. Implementation Language doesn't matter
       | 
        | I'm not sure how performant modern-day JS is, but aren't you
        | still leaving a lot of performance improvements on the table?
        | Sure, the algorithm may matter more, but who says you can't use
        | the same algorithm?
        
       | n4r9 wrote:
       | Article makes some important points - especially re visualisation
       | and cache effects.
       | 
       | Some quibbles or glossovers:
       | 
       | > A recursive algorithm is a depth first search. Any loop that
       | explores candidates/neighbors without sorting the candidates is a
       | BFS.
       | 
       | Not sure what to say about this. It's either wrong or I'm missing
       | the intuition. Both DFS and BFS can either be written iteratively
       | or recursively; the real difference is whether you pop your next
       | candidate from the top or the bottom of the stack. Equivalently,
       | whether you use a stack (FILO) or a queue (FIFO). That said, I
       | took maths rather than CS so verify this for yourself.
       | 
       | > It is simply the best foundation for any kind of informed
       | search (not just for 2d grids!)
       | 
       | A* is useful in pathfinding if you have some notion of easily-
       | computed "distance" to your desired target and you're only
       | running a few queries for any given graph. If you plan to run
       | many queries on a mostly-static graph (like a road network) then
       | you could be better off applying a pre-processing algorithm like
        | contraction hierarchies. If you're optimising but don't know what
       | target you're aiming for (e.g. Traveling Salesman) then using
       | some other local search heuristic like 2-opt may be better.
       | 
       | > The major difference between these [A* and BFS] is BFS explores
       | all adjacent nodes, while A* prioritizes exploring nodes that are
       | closer to the destination.
       | 
       | This is definitely _a_ difference. But it glosses over the
       | biggest difference, which is that A* is a dynamic algorithm. That
       | allows you to terminate early with confidence that you have found
        | the shortest path. With BFS you may not be certain until you've
       | searched the whole graph, which could be vast.
        
         | hansvm wrote:
         | > recursive == DFS
         | 
         | The intuition there is that people tend to write algorithms
         | recursively when they easily map to interactions with the top
         | of the stack, since that's easier to express in most languages
         | than bringing in an external stack to think about. Hence, if
         | you see something recursive in the wild it likely looks more
         | like DFS than BFS. As you point out, that isn't a hard rule.
        
         | thequux wrote:
         | Indeed, BFS, DFS, and A* are the same algorithm just with a
         | different data structure to track unexplored nodes. BFS uses a
         | FIFO queue, DFS uses a LIFO stack, and A* uses a priority queue
         | (generally a heap)
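          | 
          | Sketched in TypeScript (hypothetical Frontier interface, not
          | from any particular library), the loop itself never changes;
          | only the frontier you plug in does:
          | 
          |   interface Frontier<T> {
          |     push(item: T, priority: number): void;
          |     pop(): T | undefined;
          |   }
          | 
          |   function search<N>(
          |     start: N,
          |     isGoal: (n: N) => boolean,
          |     neighbors: (n: N) => Array<[N, number]>,
          |     frontier: Frontier<[N, number]>,
          |     h: (n: N) => number = () => 0,
          |   ): N | null {
          |     const seen = new Set<N>();
          |     frontier.push([start, 0], h(start));
          |     for (let cur = frontier.pop(); cur;
          |          cur = frontier.pop()) {
          |       const [node, g] = cur;
          |       if (isGoal(node)) return node;
          |       if (seen.has(node)) continue;
          |       seen.add(node);
          |       for (const [next, cost] of neighbors(node)) {
          |         // FIFO frontier        -> BFS (priority ignored)
          |         // LIFO frontier        -> DFS (priority ignored)
          |         // min-heap on g        -> Dijkstra (h = 0)
          |         // min-heap on g + h(n) -> A*
          |         frontier.push([next, g + cost],
          |                       g + cost + h(next));
          |       }
          |     }
          |     return null;
          |   }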
        
           | n4r9 wrote:
           | Yeah. And A* simply generalises Dijkstra's algorithm by
           | adding a distance heuristic to the priority expression.
        
           | o11c wrote:
           | A* is also importantly different for what the ordering of the
           | priority queue is. Dijkstra's algorithm is just BFS with a
           | priority queue, but the lack of heuristic in the comparison
           | key keeps it clearly breadth-first.
        
         | Karliss wrote:
          | No, you don't have to search the whole graph with BFS (that
          | might happen if the source and target are on opposite corners);
          | the first time you reach a node you know 100% that it is via
          | the shortest path. That's one of the basic invariants allowing
          | BFS to produce a correct result: you can terminate early once
          | all targets are reached. A difference between A* and BFS is
          | that instead of searching for the shortest path between two
          | points, BFS finds the shortest path from a single point to ALL
          | points in the graph. A* makes a tradeoff, speeding up an
          | individual query by answering a weaker question. When the
          | problem allows it, replacing thousands of A* calls with a
          | single BFS or Dijkstra call can give a major speedup. Another
          | important difference is that BFS only works on graphs where all
          | edges have the same length, while A* supports mixed edge
          | lengths. They are not interchangeable, just like finding the
          | minimum element in a list isn't a replacement for sorting the
          | list.
        
           | o11c wrote:
           | This is wrong or at least misleading in a couple of ways:
           | 
           | A* is not limited to point-to-point, it also handles point-
           | to-set-of-points just fine (you just might have to reverse
           | the direction you're thinking in). Although this is most
           | often used for "we only need a path to one of them", it's
           | still faster even if you need a path to all of them due to
           | merging. Visiting _all_ points (as BFS does) on the graph is
            | rarely what you actually want (though admittedly, if you're
           | able to throw away the grid it becomes more likely; like the
           | article, I do find people use grids far too often).
           | 
           | BFS works just fine with variable edge weights, you just have
           | to use a priority queue instead of a plain queue for the next
           | set of nodes to be visited.
        
             | roetlich wrote:
             | > BFS works just fine with variable edge weights, you just
             | have to use a priority queue instead of a plain queue for
             | the next set of nodes to be visited.
             | 
             | But usually you don't call it BFS if it's using a priority
             | queue?
        
           | n4r9 wrote:
           | > first time you reach a node you know 100% that it is the
           | shortest path
           | 
           | As you note later in your comment, this is only true in the
           | case of uniform edge weights. In general you can come up with
           | easy counterexamples e.g. a very long direct edge from source
           | to target vs lots of short edges which sum to a smaller
           | amount.
        
       | hyperbrainer wrote:
       | > > A recursive algorithm is a depth first search. Any loop that
       | explores candidates/neighbors without sorting the candidates is a
       | BFS.
       | 
       | BFS is also a recursive search. Even in the case of non-recursive
       | search, the only difference is whether you use a queue or stack.
       | 
       | Apart from that, great article.
        
         | fire_lake wrote:
         | And even then, execution depends on your language and compiler.
         | 
         | Write a recursive BFS in Haskell and it won't blow up the
         | stack.
        
           | hyperbrainer wrote:
           | Any language with decent TCO won't do that. Python is the
           | only big language that I can think of that doesn't do it.
        
             | fire_lake wrote:
             | Guaranteed TCO is pretty rare unfortunately.
             | 
             | Java, Go, JavaScript all lack it.
        
       | tobyhinloopen wrote:
       | If you're navigating on a 2D grid, Jump Point Search "Plus" can
       | be extremely fast:
       | 
       | https://www.gameaipro.com/GameAIPro2/GameAIPro2_Chapter14_JP...
        
       | jofla_net wrote:
        | Wish I had read #3 a few years ago before doing one of those
       | 'interview' 'projects'.
        
       | dkjaudyeqooe wrote:
       | > The QuadTree and every general-purpose tree data structure are
       | insanely slow. Trees are not an informed representation of your
       | data.
       | 
       | > Any time you're using a tree you're ignoring an O(~1) hash
       | algorithm for a more complicated O(log(N)) algorithm
       | 
       | This is incredibly misguided. The hashing approach is fine if
       | your points are evenly distributed and you only really want to
       | query an area relatively close to the fixed subdivisions you've
        | chosen; otherwise your O(1) is going to degenerate to O(n).
       | 
       | Trees are an informed representation when you don't know how your
       | data is distributed.
       | 
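        | For concreteness, the "hash" approach in question is just a
        | fixed-cell grid, roughly like this sketch; the O(~1) claim only
        | holds while points spread fairly evenly over the cells:
        | 
        |   type Pt = { x: number; y: number };
        | 
        |   class SpatialHash {
        |     private cells = new Map<string, Pt[]>();
        |     constructor(private cellSize: number) {}
        | 
        |     private key(x: number, y: number): string {
        |       const c = this.cellSize;
        |       return `${Math.floor(x / c)},${Math.floor(y / c)}`;
        |     }
        | 
        |     insert(p: Pt): void {
        |       const k = this.key(p.x, p.y);
        |       const cell = this.cells.get(k);
        |       if (cell) cell.push(p);
        |       else this.cells.set(k, [p]);
        |     }
        | 
        |     // All points within r of (x, y): scan covered cells.
        |     // Degenerates to O(n) when everything piles into a
        |     // handful of cells.
        |     near(x: number, y: number, r: number): Pt[] {
        |       const out: Pt[] = [];
        |       const c = this.cellSize;
        |       const x0 = Math.floor((x - r) / c);
        |       const x1 = Math.floor((x + r) / c);
        |       const y0 = Math.floor((y - r) / c);
        |       const y1 = Math.floor((y + r) / c);
        |       for (let gx = x0; gx <= x1; gx++) {
        |         for (let gy = y0; gy <= y1; gy++) {
        |           const bucket =
        |             this.cells.get(`${gx},${gy}`) ?? [];
        |           for (const p of bucket) {
        |             const dx = p.x - x, dy = p.y - y;
        |             if (dx * dx + dy * dy <= r * r) out.push(p);
        |           }
        |         }
        |       }
        |       return out;
        |     }
        |   }
        | 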
       | Similar story for randomized algorithms. What happens when your
       | search space is many trillions of items or possibilities? Or
       | there are no heuristics to be had? If you can't brute force it
       | and you can't use a clever algorithm then randomized algorithms
       | are a savior.
       | 
       | Maybe these aren't needed for this specific application, but best
       | to not make sweeping statements.
        
         | rtpg wrote:
         | Measure measure measure measure. Each case is different.
         | 
         | But more seriously I think tree based algos tend to be
         | overhyped, and people get a bit absorbed into thinking about
         | big-O behavior when the constant factors are super important
         | even when you're looking at hundreds of thousands of elements.
         | Not to mention stuff like data locality. Like sometimes your
         | computer will be faster at just looking in a seq scan rather
         | than dealing with the bookkeeping of a fancier structure.
         | Sometimes.
         | 
         | I think a nicer argument overall is to build a tiny wrapper
         | around your operations, build out what is easy, and then figure
         | it out through measurements.
         | 
         | Worst case you end up entirely rewriting your program to
         | accommodate a different structure to try for better perf but in
         | my experience rewriting entire files from scratch tends to get
         | you a good number of improvements for free.
        
           | dkjaudyeqooe wrote:
           | Sequential scans are terribly underrated given today's
           | architectures. If the datum is small and it's like 1000
           | elements max it's going to be hard to beat a simple scan of
           | an array for speed and memory efficiency.
           | 
           | But trees et al also make a lot of sense as a default. If you
           | need the performance to scale in a roughly linear manner you
           | need to be smart about it. This is what sinks a lot of
           | software, usability wise.
        
             | seveibar wrote:
             | Author here. This was almost worth a mention in the
             | article. Every once in a while I'd have a friend take a
             | look at a piece of my algorithm and they'd suggest swapping
             | out a sort() function for a priority queue etc. (without
             | looking at the performance data) and find no change or
              | sometimes even a slightly worse result! This is why having
              | performance data is so critical: the performance of a
              | complex algorithm can be counter-intuitive, and most
              | performance "gotchas" are just straight-up mistakes or
             | recalculations rather than fundamental algorithm issues
             | (remember the GTA5 bug that caused the loading screen to
             | take an extra 5 seconds, and was simple enough for a random
             | dev to patch? that is the most common type of performance
             | issue in development!)
        
         | phkahler wrote:
         | For 3D I found octrees very effective and fast. In my
         | implementation you can even move items around without having
         | to regenerate the tree.
         | 
         | I have yet to find a satisfactory way to store points (in 2D
         | or 3D) and be able to query nearby points. A k-d tree is
         | nice, but I want to add points as I go, not build a structure
         | around a fixed set.
        
           | o11c wrote:
           | I suspect that thinking in trie terms is the way to go, where
           | each node chooses its preferred representation.
           | 
           | The top level should usually use a grid, though uniform-
           | density children can also use one. The grid scale should be
           | chosen to maximize low-density children while minimizing
           | empty children.
           | 
           | For low-density children, just store an unsorted bunch of
           | items, or maybe sort them on just one axis (preferably the
           | outermost axis used for rendering). If your outer level is
           | actually uniform, _all_ of its children will be this (or
           | empty).
           | 
           | For high-density children, use a quadtree/octree ... or just
           | treat them as opaque objects to be handled recursively. These
           | still suck for all the memory-walking they do, but since
           | you've handled the outer part using the grid, the overhead is
           | smaller. These _can_ do "nearby" queries if your
           | implementation is good, but many implementations suck.
           | Making them mutable just means you need to either implement
           | rebalancing or just stick to fixed planes.
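           | 
           | One way to read that layout in code (an illustrative
           | TypeScript sketch; Finer, FLAT_LIMIT and the promotion rule
           | are my own stand-ins, not anything from this thread): each
           | grid cell stays a flat unsorted array while sparse and is
           | promoted to a finer structure once it gets dense.
           | 
           |   type Pt = { x: number; y: number }
           | 
           |   // Stand-in for a quadtree or a recursive sub-grid.
           |   interface Finer { insert(p: Pt): void }
           | 
           |   type Cell =
           |     | { kind: "flat"; pts: Pt[] }
           |     | { kind: "dense"; sub: Finer }
           | 
           |   const FLAT_LIMIT = 32
           | 
           |   function insertInCell(
           |     cell: Cell,
           |     p: Pt,
           |     makeFiner: (pts: Pt[]) => Finer
           |   ): Cell {
           |     if (cell.kind === "dense") {
           |       cell.sub.insert(p)
           |       return cell
           |     }
           |     cell.pts.push(p)
           |     if (cell.pts.length <= FLAT_LIMIT) return cell
           |     // promote the bucket once it gets too dense
           |     return { kind: "dense", sub: makeFiner(cell.pts) }
           |   }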
        
       | GistNoesis wrote:
       | Point 8, the fast dismissal of the Monte-Carlo method, is a
       | huge blunder.
       | 
       | The point of the Monte-Carlo method is that you can trade off
       | accuracy for speed. The longer you run your algorithm the more
       | accurate you get.
       | 
       | But what's even more interesting is that we can often use the
       | flip side: you can get a low-accuracy result really fast.
       | 
       | Instead of exploring all paths, you explore only one path picked
       | at random.
       | 
       | And that's where it shines: when you put it into the most
       | imbricated loop of your algorithm.
       | 
       | For example when you want to learn a neural network which
       | autoroutes. Typically you would have the outer loop, which
       | updates the neural network parameters, and the inner loop,
       | which computes a path through the graph.
       | 
       | When you use Monte-Carlo, this inner loop, which controls
       | accuracy, can be reduced to 1 iteration if everything is
       | unbiased. You will have a slower outer loop due to the higher
       | variance, but the machine learning should """theoretically"""
       | still learn.
       | 
       | This is what allows you to build policies which intuitively
       | always pick the right decision. Like in chess or Go, you have
       | some Monte-Carlo tree search variant like alpha-go-zero or
       | alpha-chess-zero or alpha-router-zero, where even without the
       | search part, the giant cache (encoded by the neural network
       | parameters), once trained, can compute your best-guess path in
       | one pass through your neural network, aka constant time (with a
       | constant which can easily trade memory for speed by increasing
       | the number of parameters or training for longer).
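       | 
       | The simplest form of that accuracy-for-speed dial, stripped of
       | the autorouting context (an illustrative TypeScript sketch with
       | made-up names): estimate the unobstructed fraction of a board
       | by random sampling. One sample is already an unbiased, if very
       | noisy, estimate; more samples buy accuracy roughly as
       | 1/sqrt(samples).
       | 
       |   type Rect = { x: number; y: number; w: number; h: number }
       | 
       |   function freeAreaFraction(
       |     W: number,
       |     H: number,
       |     obstacles: Rect[],
       |     samples: number
       |   ) {
       |     let free = 0
       |     for (let i = 0; i < samples; i++) {
       |       const x = Math.random() * W
       |       const y = Math.random() * H
       |       const blocked = obstacles.some(
       |         (o) =>
       |           x >= o.x && x <= o.x + o.w &&
       |           y >= o.y && y <= o.y + o.h
       |       )
       |       if (!blocked) free++
       |     }
       |     return free / samples
       |   }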
        
         | ttyprintk wrote:
         | Yet the author mentioned simulated annealing, so was probably
         | not even trying a neural net, since SA does not compute a
         | gradient.
        
           | ttul wrote:
           | Simulated annealing has long been one of the classic
           | metaheuristic methods applied to autorouting in circuit
           | design. It was widely used in early research and some
           | commercial tools because its probabilistic approach helps
           | escape local minima in these NP-hard problems. Today, while
           | many modern autorouters in industry tend to favor faster or
           | more specialized hybrid methods, simulated annealing remains
           | a common benchmark in academic studies and research
           | prototypes for routing and related tasks like floorplanning.
           | 
           | Source: I once wrote a simulated annealer for VLSI wiring. It
           | was cool.
        
             | opwieurposiu wrote:
             | Simulated annealing works great for placing map labels.
             | Let's say you have to label POIs with flags, you do not
             | want the flags to overlap, and you can only place the
             | labels NSEW of the POI (a rough sketch of this loop
             | follows the steps below).
             | 
             | 1. Plop the labels down in random* directions.
             | 
             | 2. Calculate the total overlap.
             | 
             | 3. Randomly* move 10% of the labels.
             | 
             | 4. Calculate the total overlap again. If the overlap is
             | improved, keep your changes; if things got worse,
             | backtrack.
             | 
             | 5. Sometimes you randomly keep changes that make things
             | worse, this is the annealing part. This is how you escape
             | local minima.
             | 
             | * The randomness does not have to be 100%: you can try
             | smartly avoiding nearest neighbors 2/3 of the time and
             | then do a random move 1/3 of the time. If you always try
             | to be smart you will tend to get trapped in a local
             | minimum.
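             | 
             | A compact sketch of that loop (illustrative TypeScript;
             | the types, the cooling schedule and the 0.1 acceptance
             | factor are my own choices, and cost() stands in for the
             | overlap calculation):
             | 
             |   const DIRS = ["N", "S", "E", "W"] as const
             |   type Dir = (typeof DIRS)[number]
             | 
             |   // Anneal n label directions; cost(p) is the
             |   // total overlap of placement p.
             |   function anneal(
             |     n: number,
             |     cost: (p: Dir[]) => number,
             |     steps = 10_000
             |   ): Dir[] {
             |     const rnd = () => DIRS[(Math.random() * 4) | 0]
             |     let cur = Array.from({ length: n }, rnd)
             |     let curCost = cost(cur)
             |     for (let i = 0; i < steps; i++) {
             |       const temp = 1 - i / steps // cools to 0
             |       const next = cur.slice()
             |       // randomly move ~10% of the labels
             |       for (let j = 0; j < Math.max(1, n / 10); j++) {
             |         next[(Math.random() * n) | 0] = rnd()
             |       }
             |       const nextCost = cost(next)
             |       // keep improvements; occasionally keep a
             |       // worse move while still "hot"
             |       if (
             |         nextCost < curCost ||
             |         Math.random() < 0.1 * temp
             |       ) {
             |         cur = next
             |         curCost = nextCost
             |       }
             |     }
             |     return cur
             |   }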
        
         | ur-whale wrote:
         | > Point 8, the fast dismissal of Monte-Carlo method is a huge
         | blunder.
         | 
         | Yeah, I had the exact same reaction when I read the article.
         | 
         | MC is the "keeping it real" algorithm: slow, but almost
         | always dead simple to implement, and one that can be trusted
         | with super high certainty to double-check that you've not
         | completely wandered off in the weeds.
        
           | bee_rider wrote:
           | Which is sort of funny given the intuitive description of
           | Monte Carlo--a random wander, possibly into the weeds.
        
         | throwaway-blaze wrote:
         | I was this many days old when I learned the word "imbricated".
        
           | GistNoesis wrote:
           | Yeah you're right, I should have said nested (it is a valid
           | English word borrowed from the French "imbriqué(es)").
        
             | burnished wrote:
             | Nothing wrong with inadvertently teaching folk new words;
             | it was an uncommon and welcome surprise for me.
        
       | xg15 wrote:
       | I want to thank this guy for the "avoid Monte Carlo" point and
       | the big "no randomness" image.
       | 
       | I feel people are getting way too comfortable throwing
       | randomness at things, pretending it's perfectly fine or even
       | beneficial because you can reason over the probability
       | distributions, and then shrugging their shoulders when the
       | results have unpredictable and impossible-to-debug edge-case
       | behavior.
       | 
       | We already have enough unpredictable factors in a program; I
       | feel we don't have to purposefully add even more of them.
        
         | windward wrote:
         | Agreed. Adding a test that generates a random 32-bit int is
         | adding 4 billion tests to your project and only running one
         | of them.
        
       | fooblaster wrote:
       | I find it very amusing how essentially all digital VLSI has
       | been autorouted for years, but no one can seemingly solve the
       | board-level PCB autorouting problem. I get that one is digital
       | and the other can be essentially anything, but the difference
       | in practice is a strange juxtaposition.
        
       | amiga386 wrote:
       | > 95% of your focus should be on reducing the number of
       | iterations. This is why language doesn't matter.
       | 
       | And then, once you've used the playful, expressive (interpreted,
       | abstract, slow) language you enjoy to develop an excellent,
       | performant algorithm... if performance still matters, write the
       | same thing again in a performant, low-level language, and
       | perhaps even write architecture-specific assembly.
       | 
       | There's a reason numpy, pandas, OpenCV, TensorFlow aren't written
       | in pure Python, but instead let Python tell them to do things
       | they've implemented in high performance C++/assembly/CUDA, etc.
       | 
       | No matter how proud the authors would be of exploring a problem
       | space and coming up with an efficient algorithm (and blogging
       | about it), I doubt they'd be popular numerical computing
       | libraries if they insisted on it being written in pure Python, or
       | Javascript.
       | 
       | While this is a fun blog post, I don't think it'd have the same
       | takeaways if the author's algorithmic insights got a pure-
       | Javascript HEVC encoder down from 1 day per frame to 3 hours per
       | frame...
        
       | bArray wrote:
       | Does anybody have materials regarding how you would go about
       | writing an autorouter? I was thinking of maybe using something
       | like this [0] as a starting point.
       | 
       | I have a few major nags about current autorouters:
       | 
       | 1. No way to prioritise connections.
       | 
       | 2. Handling of differential pairs or buses is terrible.
       | 
       | 3. Does not use best practices for high-speed signal lines.
       | 
       | 4. No way to add arbitrary rules to the autorouting process,
       | e.g. block a trace from going through an area. For example,
       | maybe you don't want a power trace being autorouted under your
       | IMU, but some low-voltage signals are fine (one possible shape
       | for such a rule is sketched at the end of this comment).
       | 
       | I've used freerouting [1] in the past, but it has had a somewhat
       | difficult time staying maintained. Freerouting seems to really
       | struggle with larger designs: it greedily routes traces and then
       | spends ages trying to fix those early placed traces.
       | 
       | [0] https://github.com/vygr/Python-PCB
       | 
       | [1] https://github.com/freerouting/freerouting
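       | 
       | For what it's worth, point 4 could be as simple as a cost hook
       | in a grid-based router (a purely illustrative TypeScript
       | sketch; none of this is freerouting's or Python-PCB's actual
       | API, and the rule shape is made up):
       | 
       |   type Rect = { x: number; y: number; w: number; h: number }
       | 
       |   // A keep-out region plus the nets it applies to, e.g.
       |   // block power traces under an IMU but let low-voltage
       |   // signals through.
       |   type KeepOut = { region: Rect; blockedNets: Set<string> }
       | 
       |   function inRect(x: number, y: number, r: Rect) {
       |     return x >= r.x && x <= r.x + r.w &&
       |            y >= r.y && y <= r.y + r.h
       |   }
       | 
       |   // Per-cell step cost for an A*-style grid search: a
       |   // blocked net simply never expands into the keep-out.
       |   function stepCost(
       |     net: string,
       |     x: number,
       |     y: number,
       |     rules: KeepOut[]
       |   ) {
       |     for (const rule of rules) {
       |       const blocked =
       |         rule.blockedNets.has(net) &&
       |         inRect(x, y, rule.region)
       |       if (blocked) return Infinity
       |     }
       |     return 1
       |   }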
        
       | CamouflagedKiwi wrote:
       | The third point - "Spatial Hash Indexing > Tree Data Structures"
       | - is probably great in this domain, but shouldn't necessarily be
       | taken as global truth. I've optimised a system significantly
       | before by going from a grid to a tree - because if your points
       | are very unevenly distributed then you effectively have a bad
       | hash function, and it devolves to O(n) performance.
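       | 
       | For contrast with the fixed grid, the adaptive alternative
       | being described is something like a point quadtree (an
       | illustrative TypeScript sketch; CAP and the depth cap are
       | arbitrary): clustered data just produces a deeper tree instead
       | of one enormous bucket.
       | 
       |   type Pt = { x: number; y: number }
       |   type Box = { x: number; y: number; w: number; h: number }
       | 
       |   const CAP = 8 // max points per leaf before splitting
       | 
       |   class QuadTree {
       |     pts: Pt[] = []
       |     kids: QuadTree[] | null = null
       |     constructor(public box: Box, public depth = 0) {}
       | 
       |     insert(p: Pt): void {
       |       if (this.kids) {
       |         this.child(p).insert(p)
       |         return
       |       }
       |       this.pts.push(p)
       |       // depth cap guards against many identical points
       |       if (this.pts.length > CAP && this.depth < 12) {
       |         this.split()
       |       }
       |     }
       | 
       |     private child(p: Pt): QuadTree {
       |       const { x, y, w, h } = this.box
       |       const i =
       |         (p.x < x + w / 2 ? 0 : 1) +
       |         (p.y < y + h / 2 ? 0 : 2)
       |       return this.kids![i]
       |     }
       | 
       |     private split(): void {
       |       const { x, y, w, h } = this.box
       |       const hw = w / 2, hh = h / 2, d = this.depth + 1
       |       this.kids = [
       |         new QuadTree({ x, y, w: hw, h: hh }, d),
       |         new QuadTree({ x: x + hw, y, w: hw, h: hh }, d),
       |         new QuadTree({ x, y: y + hh, w: hw, h: hh }, d),
       |         new QuadTree({ x: x + hw, y: y + hh, w: hw, h: hh }, d),
       |       ]
       |       for (const p of this.pts) this.child(p).insert(p)
       |       this.pts = []
       |     }
       |   }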
        
         | swiftcoder wrote:
         | Not to mention that most hashmap implementations for managed
         | languages incur ~2 cache misses per lookup. I've had an awful
         | lot of optimisation wins over the years just by ripping hash
         | maps out of software that either didn't actually need arbitrary
         | key->value lookups, or had datasets large enough that a
         | sufficiently wide tree-like structure handily beat the hashmap.
        
       | thenthenthen wrote:
       | Looking forward to it! Feature request: would it be possible to
       | add obfuscation? This is not a solution to my issue (it is more
       | social), but could be... interesting.
        
       | shadowgovt wrote:
       | I love this article. Two things I'd consider highlighting:
       | 
       | 1. Autorouting in particular is a problem well-suited to
       | visualization, and not all problems are. It's fundamentally
       | about real things in real space (bonus: they're _mostly_
       | constrained to a 2D plane, which makes them easy to visualize on
       | a computer screen). A lot of hard and interesting problems don't
       | match that reality and so aren't as amenable to visualization.
       | 
       | (... but the general advice, "look for opportunities to
       | visualize," stands. Your eyes and brain are _very_ fast matchers
       | for a broad variety of patterns in a way a computer isn't
       | without a lot of special-purpose, tuned software).
       | 
       | 2. JavaScript, between the language itself (specifically the
       | ability to reflect data structures at runtime) and the ecosystem
       | around it, is very amenable to visualization. In that way
       | specifically, it was a strong choice for this problem domain.
       | Imagine how much work you'd be taking on if you decided to "just
       | bang out a quick visualization" for an arbitrary piece of the
       | algorithm in C++, or Rust.
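       | 
       | As a rough illustration of how low that bar is in JS (a sketch
       | with made-up types and sizes, not the author's tooling), a
       | throwaway debug view of some trace segments can be a single
       | string:
       | 
       |   type Seg = { x1: number; y1: number; x2: number; y2: number }
       | 
       |   // Dump segments to an SVG string; write it to a file or
       |   // paste it into a browser tab and you have a visualization.
       |   function toSvg(segs: Seg[], w = 200, h = 200): string {
       |     const lines = segs.map((s) =>
       |       `<line x1="${s.x1}" y1="${s.y1}" ` +
       |       `x2="${s.x2}" y2="${s.y2}" stroke="black"/>`
       |     ).join("")
       |     return `<svg xmlns="http://www.w3.org/2000/svg"` +
       |       ` width="${w}" height="${h}">${lines}</svg>`
       |   }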
        
       ___________________________________________________________________
       (page generated 2025-03-28 23:02 UTC)