[HN Gopher] A PCIe Coral TPU Finally Works on Raspberry Pi 5
___________________________________________________________________
A PCIe Coral TPU Finally Works on Raspberry Pi 5
Author : mikece
Score : 52 points
Date : 2023-11-17 19:16 UTC (3 hours ago)
(HTM) web link (www.jeffgeerling.com)
(TXT) w3m dump (www.jeffgeerling.com)
| jauntywundrkind wrote:
| Coral is 4 years old, and it's both shocking & not that there
| aren't many competitors out there today.
|
| Also a bit sad PyCoral requires Python 3.9! Yikes!
| filterfiber wrote:
| Especially since it only has like 8 MB of memory I think?
| geerlingguy wrote:
| Some SoCs even have competent built-in NPUs, too, but
| software support is severely lacking.
|
| PyCoral and Coral hardware development seems glacial lately
| :(
|
| They have enjoyed a lot of momentum... I wish they could
| release a follow-up version and get some longstanding
| software issues resolved.
| gymbeaux wrote:
| I think companies prefer to develop AI stuff and then sell it,
| rather than sell the hardware.
| nmstoker wrote:
| Keeping the Python version up to date shouldn't be that
| hard, though, should it?
| westurner wrote:
| Would an HBM3E HAT make TPUs more useful with a Raspberry
| Pi 5, or not yet?
|
| Jetson Nano (~$149)
|
| Orin Nano (~$499, 32 tensor cores, 40 TOPS)
|
| AGX Orin (200-275 TOPS)
|
| NVIDIA Jetson > Versions:
| https://en.wikipedia.org/wiki/Nvidia_Jetson#Versions
|
| TOPS for NVIDIA [Orin] Nano [AGX]
| https://connecttech.com/jetson/jetson-module-comparison/
|
| Coral Mini-PCIe ($25; ? tensor cores, 4 TOPS (int8); 2 TOPS per
| watt)
|
| TPUv5 (393 TOPS)
|
| Tensor Processing Unit (TPU)
| https://en.wikipedia.org/wiki/Tensor_Processing_Unit
|
| AI Accelerator > Nomenclature:
| https://en.wikipedia.org/wiki/AI_accelerator
|
| NVIDIA DLSS > Architecture:
| https://en.wikipedia.org/wiki/Deep_learning_super_sampling#A... :
|
| > _DLSS is only available on GeForce RTX 20, GeForce RTX 30,
| GeForce RTX 40, and Quadro RTX series of video cards, using
| dedicated AI accelerators called Tensor Cores. [23][28] Tensor
| Cores are available since the Nvidia Volta GPU microarchitecture,
| which was first used on the Tesla V100 line of products. [29]
| They are used for doing fused multiply-add (FMA) operations that
| are used extensively in neural network calculations for applying
| a large series of multiplications on weights, followed by the
| addition of a bias. Tensor cores can operate on FP16, INT8, INT4,
| and INT1 data types._
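|
| The multiply-then-add-bias pattern that quote describes can be
| sketched in plain Python just to show the arithmetic; this is an
| illustration of the math tensor cores accelerate, not actual
| Tensor Core or TPU code, and the weights/inputs are made-up
| values:
|
| ```python
| # One dense-layer step: y[i] = sum_j(W[i][j] * x[j]) + b[i].
| # This is the fused multiply-add (FMA) pattern from the quoted
| # passage: a series of multiplications on weights, followed by
| # the addition of a bias. Values here are arbitrary examples;
| # on an accelerator they would typically be int8 with int32
| # accumulation.
| def fma_layer(W, x, b):
|     return [sum(w * xi for w, xi in zip(row, x)) + bi
|             for row, bi in zip(W, b)]
|
| W = [[1, -2], [3, 4]]   # weight matrix
| x = [5, 6]              # input activations
| b = [10, -10]           # bias vector
|
| print(fma_layer(W, x, b))  # [3, 29]
| ```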
|
| Vision processing unit:
| https://en.wikipedia.org/wiki/Vision_processing_unit
|
| Versatile Processing Unit (VPU)
___________________________________________________________________
(page generated 2023-11-17 23:00 UTC)