https://hackaday.com/2021/11/13/opengl-machine-learning-runs-on-low-end-hardware/

OpenGL Machine Learning Runs On Low-End Hardware
by: Tom Nardi, November 13, 2021

If you've looked into GPU-accelerated machine learning projects, you're certainly familiar with NVIDIA's CUDA architecture. It also follows that you've checked the prices online, and know how expensive it can be to get a high-performance video card that supports this particular brand of parallel programming.

But what if you could run machine learning tasks on a GPU using nothing more exotic than OpenGL? That's what [lnstadrum] has been working on for some time now, as it would allow devices as meager as the original Raspberry Pi Zero to run tasks like image classification far faster than they could using their CPU alone. The trick is to break down your computational task into something that can be performed using OpenGL shaders, which are generally meant to push video game graphics.

[Image: An example of X2's neural net upscaling.]

[lnstadrum] explains that OpenGL releases from the last decade or so actually include so-called compute shaders specifically for running arbitrary code. But unfortunately that's not an option on boards like the Pi Zero, which only meets the OpenGL for Embedded Systems (GLES) 2.0 standard from 2007. Constructing the neural net in such a way that it would be compatible with these more constrained platforms was much more difficult, but the end result has far more interesting applications to show for it. During tests, both the Raspberry Pi Zero and several older Android smartphones were able to run a pre-trained image classification model at a respectable rate.

This isn't just some thought experiment; [lnstadrum] has released an image processing framework called Beatmup built around these concepts that you can play around with right now. The C++ library has Java and Python bindings, and according to the documentation, should run on pretty much anything. Included in the framework is a simple tool called X2 which can perform AI image upscaling on everything from your laptop's integrated video card to the Raspberry Pi, making it a great way to check out this fascinating application of machine learning.

Truth be told, we're a bit behind the ball on this one, as Beatmup made its first public release back in April of this year. It might have flown under the radar until now, but we think there's a lot of potential for this project, and we hope to see more of it once word gets out about the impressive results it can wring out of even the lowliest hardware.

[Thanks to Ishan for the tip.]

Posted in: Software Hacks. Tagged: machine learning, neural network, opengl, opengl es, shader.
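To make the trick a bit more concrete, here is a minimal sketch in plain C++ and GLES 2.0. It is not code from Beatmup itself, just an illustration of the general render-to-texture approach under the same constraints the Pi Zero imposes: no compute shaders, and only 8-bit RGBA textures to hold the data. A single 3x3 convolution is written as a fragment shader, and "running" it means drawing a full-screen quad into an off-screen framebuffer. The file name, the box-blur weights, and the 64x64 dummy input are placeholders, and error handling is omitted for brevity.

```cpp
// conv_gles2.cpp -- illustrative only, not code from Beatmup.
// Build (Linux, Mesa or similar): g++ conv_gles2.cpp -lEGL -lGLESv2
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <cstdio>
#include <vector>

// GLES 2.0 has no compute shaders, so the "kernel" is an ordinary fragment shader:
// one fragment = one output value, stored in an 8-bit RGBA texture.
static const char* kVert =
    "attribute vec2 pos;\n"
    "varying vec2 uv;\n"
    "void main() { uv = pos * 0.5 + 0.5; gl_Position = vec4(pos, 0.0, 1.0); }\n";

static const char* kFrag =
    "precision mediump float;\n"
    "uniform sampler2D img;\n"
    "uniform vec2 texel;\n"   // 1 / texture size
    "uniform float w[9];\n"   // 3x3 convolution weights
    "varying vec2 uv;\n"
    "void main() {\n"
    "  vec4 acc = vec4(0.0);\n"
    "  for (int dy = -1; dy <= 1; dy++)\n"
    "    for (int dx = -1; dx <= 1; dx++)\n"
    "      acc += w[(dy + 1) * 3 + (dx + 1)]\n"
    "             * texture2D(img, uv + vec2(float(dx), float(dy)) * texel);\n"
    "  gl_FragColor = clamp(acc, 0.0, 1.0);\n"  // result must fit back into RGBA8
    "}\n";

static GLuint compile(GLenum type, const char* src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, nullptr);
    glCompileShader(s);              // error checking omitted for brevity
    return s;
}

int main() {
    // 1. Tiny off-screen EGL context -- no display server required.
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, nullptr, nullptr);
    const EGLint cfgAttr[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
                               EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
    EGLConfig cfg; EGLint n;
    eglChooseConfig(dpy, cfgAttr, &cfg, 1, &n);
    const EGLint pbAttr[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttr);
    const EGLint ctxAttr[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    eglMakeCurrent(dpy, surf, surf, eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttr));

    // 2. Compile and link the "compute kernel".
    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile(GL_VERTEX_SHADER, kVert));
    glAttachShader(prog, compile(GL_FRAGMENT_SHADER, kFrag));
    glLinkProgram(prog);
    glUseProgram(prog);

    // 3. Input "tensor": W x H x 4 channels packed into an RGBA8 texture.
    const int W = 64, H = 64;
    std::vector<unsigned char> input(W * H * 4, 128);
    GLuint inTex, outTex, fbo;
    glGenTextures(1, &inTex);
    glBindTexture(GL_TEXTURE_2D, inTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, input.data());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // 4. Output "tensor": another RGBA8 texture behind a framebuffer object,
    //    so results can be read back, or bound as the next layer's input.
    glGenTextures(1, &outTex);
    glBindTexture(GL_TEXTURE_2D, outTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outTex, 0);
    glViewport(0, 0, W, H);

    // 5. Uniforms: a 3x3 box-blur kernel and the texel size for neighbour lookups.
    const GLfloat w9[9] = { 1/9.f, 1/9.f, 1/9.f, 1/9.f, 1/9.f, 1/9.f, 1/9.f, 1/9.f, 1/9.f };
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, inTex);
    glUniform1i(glGetUniformLocation(prog, "img"), 0);
    glUniform1fv(glGetUniformLocation(prog, "w"), 9, w9);
    glUniform2f(glGetUniformLocation(prog, "texel"), 1.f / W, 1.f / H);

    // 6. "Dispatch": draw a full-screen quad; every covered pixel runs the shader once.
    const GLfloat quad[] = { -1, -1, 1, -1, -1, 1, 1, 1 };
    GLint pos = glGetAttribLocation(prog, "pos");
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, quad);
    glEnableVertexAttribArray(pos);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // 7. Read the result back; a deeper net would bind outTex as the next pass's input.
    std::vector<unsigned char> out(W * H * 4);
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, out.data());
    std::printf("first output texel: %d %d %d %d\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

Chaining several passes of this kind, with each pass sampling the texture the previous one rendered into, is roughly how a multi-layer network can be evaluated on such hardware; squeezing acceptable accuracy out of 8-bit textures is presumably where much of the hard work in a project like Beatmup lies.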
17 thoughts on "OpenGL Machine Learning Runs On Low-End Hardware"

1. M says: November 13, 2021 at 2:02 pm
   PyTorch, one of the two most popular machine learning toolkits, is slowly picking up support for running on top of the Vulkan graphics API. The support is intended for running machine learning models on Android smartphones, but since the Pi now has a Vulkan driver, I don't see why it wouldn't eventually work there too. It would be amazing if this eventually matured, as Vulkan is quickly becoming ubiquitous. When every GPU driver eventually supports it, no special or unusual hardware will be required any longer.

   1. MrSVCD says: November 13, 2021 at 3:04 pm
      Is Vulkan support planned for more than the Pi 4, Pi 400, and CM4?

      1. M says: November 13, 2021 at 11:48 pm
         Vulkan support in PyTorch works on any GPU with a Vulkan driver (though it isn't necessarily bug-free). Not just smartphones. Not just the Pi. Not just AMD, or NVIDIA, or Intel.

         1. N Stoker says: November 14, 2021 at 2:53 pm
            I could be wrong, but I think the point being made was that on the Pi, Vulkan is currently only available on the Pi 4.

2. M says: November 13, 2021 at 3:50 pm
   The difference has little to do with CUDA vs. OpenGL (they're nothing but software interfaces) and a lot more to do with how powerful the GPU itself is. Even if you implemented a CUDA driver for the Pi, it's still one of the weakest GPUs on earth.

   1. M says: November 13, 2021 at 11:49 pm
      Hmm, the comment I replied to disappeared.

      1. John says: November 14, 2021 at 5:54 pm
         It got deleted for some reason. I'm as confused by it as you are. I didn't mean to say that it was a CUDA vs. OpenGL difference either. I meant that the article here is a bit confusing with its use of "GPU" for both the CUDA GPUs and the built-in Pi GPU, referring to them both simply as "GPU". Yes, they're both GPUs, but not quite the same thing, and the change in topic led me to believe they were connecting PCI-e cards to Pis. It was an assumption on my part perhaps, but I didn't get there on my own. Do authors delete comments they don't like on their own posts, or is that a task reserved for a specific editor? What on earth is going on over at Hackaday?

         1. M says: November 14, 2021 at 6:46 pm
            The two GPUs are mostly the same, though there do tend to be some architectural differences between the style used for mobile GPUs and the style used for desktop GPUs. Interestingly, the architecture of the M1 is closer to a desktop GPU than a mobile one. https://rosenzweig.io/blog/asahi-gpu-part-3.html I suppose on the Pi 4 you could attach a large CUDA-capable GPU (and many people have tried). I'm not 100% sure if anyone has gotten past the BAR address space mapping issue yet? Maybe with a modified kernel/device tree?

         2. Confused says: November 15, 2021 at 1:24 pm
            If comments aren't constructive or useful, they will get deleted. What sounds like a comment about you not comprehending the meaning of the post certainly doesn't sound worth keeping to me.

3. Truth says: November 13, 2021 at 6:04 pm
   If I need to scale something tiny to be gigantic, I usually run it through hqx4 a few times (ref: https://en.wikipedia.org/wiki/Hqx). It does not use AI, but it does an insanely impressive job.

   1. M says: November 14, 2021 at 12:20 am
      Nice! I'll have to remember that one for later!

4. Somun says: November 13, 2021 at 6:45 pm
   It is unfortunate that these AI frameworks were based on a proprietary platform (CUDA) rather than OpenCL.

   1. Christian Knopp says: November 13, 2021 at 10:10 pm
      Research grants, no doubt...
      1. M says: November 13, 2021 at 11:58 pm
         CUDA has been king far, far longer than any of these AI frameworks have existed. Neural network techniques only date back to about 2010. As for how CUDA beat out OpenCL to become the de facto standard, that's another story.

5. Christian Knopp says: November 13, 2021 at 10:09 pm
   New Something-Pi boards with 0.01-cent used Intel integrated GPU chips incoming... Just the socket!

6. Unfocused says: November 13, 2021 at 11:07 pm
   I have a Google Coral USB Accelerator sitting on my desk waiting to be played with, which is another cheap & accessible avenue. https://coral.ai/products/accelerator/

7. John says: November 14, 2021 at 5:47 pm
   Whoever is in charge of deleting comments is getting overly sensitive.