Post ATdkQ2teyLtuROW6Lo by 8petros@petroskowo.pl
(DIR) More posts by 8petros@petroskowo.pl
(DIR) Post #ATdQF12fWtw4Q6C4oK by 8petros@petroskowo.pl
2023-03-15T09:00:48Z
0 likes, 0 repeats
If I understand the concept of #FPGA properly, it uses "generic" electronic modules (FPG arrays) that can be (reversibly?) "personalized" to behave like custom-designed circuits. To personalize them, we need:

1. Some design and development environment, where we design and test specific configurations for a given functionality. This part is rather resource-intensive.

2. Some "flashing" environment to "imprint" existing models onto generic FPGA modules. This is much less resource-intensive than the previous part.

Now, I can imagine that, for the sake of #maintenance and #resilience, I may build all kinds of devices around FPGAs. Then, in case of failure (EMP, maybe), I can safely stock a pile of generic modules, an imprinter, and a library of models, so I can replace whatever is broken without looking for a specialised chip. Does it make sense, and why not? ;-)

#postapo #hitec #critical #infrastructure #collapse #doomer musings.
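[Editor's illustration] The "generic module" intuition above can be sketched in a few lines of Python. This is a toy model, not real FPGA tooling: an FPGA's basic cell is roughly a k-input lookup table (LUT), and "personalizing" the chip means loading truth tables into thousands of such LUTs and wiring them together. The function names here are made up for illustration.

```python
# Toy model of one FPGA logic cell: a lookup table (LUT).
# The same generic cell becomes different gates depending only on
# the configuration bits ("bitstream") loaded into it.

def make_lut(truth_table):
    """Return a function that behaves like whatever the truth table encodes."""
    def lut(*inputs):
        index = 0
        for bit in inputs:          # the input bits form an index into the table
            index = (index << 1) | bit
        return truth_table[index]
    return lut

# One generic cell, two different "personalities":
and_gate = make_lut([0, 0, 0, 1])   # configured as a 2-input AND
xor_gate = make_lut([0, 1, 1, 0])   # configured as a 2-input XOR

print(and_gate(1, 1))  # 1
print(xor_gate(1, 1))  # 0
```

Stockpiling "generic modules plus a library of models", in this picture, means stockpiling blank LUT arrays plus the truth-table files to load into them.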
(DIR) Post #ATdjvIUwutcfqoISDQ by remi@universeodon.com
2023-03-15T12:40:56Z
1 likes, 0 repeats
@8petros You've got the general idea. The development environment consists of (typically) a hardware description language (#vhdl or #verilog), a toolchain (usually vendor-specific, though there are some open source ones) that synthesizes the code into a netlist of logic equations, and finally a toolchain that takes that netlist, maps it onto the individual small discrete functions, and configures the routing between them.

There is volatile memory on the device that stores the current configuration and must be loaded (many ways) on power-up. There are a few models with non-volatile memory that store the configuration and load it themselves. For high-radiation environments, there are even fewer (and very expensive) models with write-once non-volatile memory for the configuration.

I don't think your idea of using them as a generic drop-in replacement for things is particularly good. The board environment is honestly more important than what you're doing in logic.
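[Editor's illustration] The "synthesize the code into logic" step can be caricatured in Python. This is a hypothetical toy, not a real synthesis tool: it plays the role a real toolchain plays for HDL code by reducing a described behaviour to a LUT configuration (truth table) that a generic cell could hold. The `synthesize` name and the full-adder example are made up for illustration.

```python
from itertools import product

def synthesize(logic_fn, n_inputs):
    """Toy 'synthesis': enumerate a boolean function into a LUT truth table."""
    table = []
    for bits in product([0, 1], repeat=n_inputs):
        table.append(1 if logic_fn(*bits) else 0)
    return table

# The "design" being synthesized: the sum bit of a full adder.
sum_bit = lambda a, b, cin: a ^ b ^ cin
config = synthesize(sum_bit, 3)
print(config)  # [0, 1, 1, 0, 1, 0, 0, 1]
```

Real toolchains do vastly more (optimization, technology mapping, timing analysis), but the end product is the same in spirit: configuration bits for generic cells.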
(DIR) Post #ATdkQ2teyLtuROW6Lo by 8petros@petroskowo.pl
2023-03-15T12:47:13Z
0 likes, 0 repeats
Thank you for clarifying it for me. I am just playing with the concept for now, and if I get my hands on appropriate equipment, I may try to test the idea in some simple context. No pressure from me - ATM it is just something to give me a bit of mental distraction between one e-learning module I translate and another. :-)
(DIR) Post #ATgQ2nsFFHmZZxNHcm by Tathar@furry.engineer
2023-03-16T19:42:17Z
0 likes, 0 repeats
@8petros The reason #1 is resource-intensive is that it relies on simulated annealing to determine which logic is performed on which logic blocks, so as to maximize performance within the design constraints. It's the same technique used in this video, for a different application. https://www.youtube.com/watch?v=Lq-Y7crQo44
(DIR) Post #ATgSHLTnIUFfle036W by Tathar@furry.engineer
2023-03-16T20:07:54Z
1 likes, 0 repeats
@8petros Importantly, all of the resource-intensive work happens in the toolchain's place-and-route algorithm. The HDL modules, and their conversion into gate-level logic, are cheap by comparison. From there, the toolchain has to fit that logic into whichever physical logic blocks best satisfy the timing, thermal, and pin constraints, by deciding where to place them on the die and how to route them together for a good-enough connection. Hence the name.
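[Editor's illustration] A minimal sketch of simulated-annealing placement, the idea behind the "place" half described above, heavily simplified: put connected blocks on a grid so the total wire length is small, occasionally accepting worse placements while the "temperature" is high to escape local minima. The block names, netlist, and grid size are made up for illustration; real tools also handle routing, timing, and far larger problems.

```python
import math
import random

random.seed(1)                       # deterministic toy run
GRID = 4                             # 4x4 grid of placement sites
blocks = ["a", "b", "c", "d"]
nets = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]  # pairs to connect

# Random initial placement: block -> (x, y), one block per site.
sites = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                      len(blocks))
place = dict(zip(blocks, sites))

def cost(p):
    """Total Manhattan wire length -- the quantity annealing minimizes."""
    return sum(abs(p[u][0] - p[v][0]) + abs(p[u][1] - p[v][1])
               for u, v in nets)

temp = 5.0
while temp > 0.01:
    u, v = random.sample(blocks, 2)
    trial = dict(place)
    trial[u], trial[v] = trial[v], trial[u]   # propose: swap two blocks
    delta = cost(trial) - cost(place)
    # Always accept improvements; accept some bad moves while "hot".
    if delta < 0 or random.random() < math.exp(-delta / temp):
        place = trial
    temp *= 0.995                              # cool down slowly

print(cost(place), place)
```

Scaled up to millions of cells with timing-driven cost functions, this kind of stochastic search is why the build step takes so long.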
(DIR) Post #ATgTP4IFBAfCjgqMme by Tathar@furry.engineer
2023-03-16T20:19:35Z
1 likes, 0 repeats
@8petros As for #2, those models are just what you get when you apply the resource-intensive algorithm to only part of the final design and treat that "solved" part as one big component, to be placed and routed as a monolith. Although it takes less time when that part of the work is already done for you, this approach can also produce a less optimal solution on the FPGA.
(DIR) Post #ATgU5B7CCqEk3WkuYq by Tathar@furry.engineer
2023-03-16T20:27:34Z
1 likes, 0 repeats
@8petros Also, if you're using the same design across multiple chips of the same FPGA model, you can flash all of them with the same bitstream file generated from #1. You only have to do the resource-intensive part once.
(DIR) Post #AU9SsOpGjCSr6QKOfo by polezaivsani@chaos.social
2023-03-30T20:01:16Z
0 likes, 0 repeats
@8petros Not an FPGA expert, though I'm thinking of dabbling with it. Your understanding seems to be on point: it's programmable hardware. Unlike programmable microcontrollers, where you ask fixed hardware to execute a custom program, with FPGAs you physically reconfigure the chip to implement your program. Both cases run user programs; the FPGA just does it orders of magnitude faster, and a one-order-of-magnitude speed bump usually translates into some breakthrough difference down the line.
(DIR) Post #AU9T6osMfi8jwueUcK by polezaivsani@chaos.social
2023-03-30T20:03:54Z
0 likes, 0 repeats
@8petros And as mentioned previously, they are harder to use. There has been slow progress toward making the ecosystem more approachable, mostly through open source hardware and free software, but it's still much bulkier than programming microcontrollers.