[HN Gopher] Imagination GPUs now support OpenGL 4.6
___________________________________________________________________
Imagination GPUs now support OpenGL 4.6
Author : mfilion
Score : 68 points
Date : 2023-07-06 18:59 UTC (4 hours ago)
(HTM) web link (blog.imaginationtech.com)
(TXT) w3m dump (blog.imaginationtech.com)
| andrewstuart wrote:
| What is this?
|
| This company makes GPUs?
| snvzz wrote:
| >This company makes GPUs?
|
| Yes, and they've been around for a while. If you're old enough,
| you might remember the name "PowerVR".
|
| Today, they mostly license GPU designs to SoC vendors. You'll
| find their designs in e.g. Android phones.
|
| Notably, JH7110 (RISC-V SoC used in VisionFive2 and Star64)
| uses one of their recent GPUs.
| vitaminka wrote:
| does this mean you can run, say, modern OpenCL code on
| something like a VisionFive2?
| snvzz wrote:
| Oomph-wise, they claim a good 4x over what the Raspberry Pi
| 4/400 have. Note that the rPi's GPU is known to be anemic
| due to serious memory bandwidth bottlenecks, barely able to
| keep up with filling a 1080p screen.
|
| Currently they do not support the specific variant used in
| JH7110, but they seem to both be variants of the same
| architecture.
|
| JH7110 has the BXE-4-32-MC1[0], whereas the driver currently
| supports the BXS-4-64-MC1[1].
|
| No idea about compute, and AIUI OpenCL support in mesa3d is
| still a disaster for all drivers.
|
| 0. https://www.imaginationtech.com/product/img-bxe-4-32-mc1/
|
| 1. https://www.imaginationtech.com/product/img-bxs-4-64-mc1/
| vetinari wrote:
| Yes; and if you used an iPhone before 2017, you used a GPU
| made by them.
| loufe wrote:
| I don't mean to detract from the technical merits of this at all,
| but for those not aware you might be interested to know that
| ImaginationTech is a Chinese-owned firm.
| vitaminka wrote:
| really no better than a US tech firm, unless you for some
| reason have a preference for the nationality of your
| hardware backdoors lol
| speed_spread wrote:
| It's nice to see Chinese tech embrace existing standards.
|
| There are a lot of chips that are very poorly documented and
| supported by abysmal proprietary SDKs. The software part of
| hardware is so often an afterthought, yet it is also what
| makes or breaks the product.
| monocasa wrote:
| They're a British tech company, simply owned by a Chinese
| private equity fund.
|
| And they're sort of known for historically being very anti
| open source, and their SDK is difficult to continuously
| integrate into a larger product.
| mschuetz wrote:
| Glad to see more support for OpenGL, but I really hope we'll
| soon move to a compute-only way of handling graphics. The
| overhead of Vulkan is absolutely insane (and not warranted,
| in my opinion), and OpenGL is on its last legs.
|
| Things like Nanite spark a little hope, since they've shown that
| software-rasterization via compute can be faster than the
| standard graphics pipeline with the hardware rasterizer for
| small, dense triangles. Seems like a matter of time until
| everything goes compute, even large triangles. Maybe the recent
| addition of work-graphs in DirectX is one step in that direction?
| verall wrote:
| > I really hope we'll soon move to a compute-only way of
| handling graphics
|
| Not happening anytime soon. The industry has lined up pretty
| solidly behind Khronos/Vulkan.
| DeathArrow wrote:
| > The industry has lined up pretty solidly behind
| Khronos/Vulkan.
|
| What industry? The gaming industry mostly uses DirectX or
| Gnm/PSSL.
| mschuetz wrote:
| I know it won't happen overnight, but since it's already
| possible to do software rasterization for small triangles
| faster than hardware, having a graphics API framework starts
| losing its purpose. After all, we want to have the detail of
| small triangles anyway. Just let us draw to the screen in
| CUDA without the need for OpenGL/Vulkan interop, and I
| believe we'll soon see a shift to serious compute-based real-
| time rendering.
|
| Basically, instead of graphics being a framework, I want
| graphics to be a straightforward library you include and use
| in your CUDA/HIP/SYCL/OpenCL code.
| vitaminka wrote:
| > let us draw to the screen in CUDA without the need for
| OpenGL/Vulkan interop
|
| how would that work? like GPU frameworks would just be
| compute (like CUDA) and some small component of it would
| just allow writing the end result to a buffer which would
| be displayed or something?
| pjmlp wrote:
| The industry targeting Android and GNU/Linux devices, that
| is.
| ddingus wrote:
| >but I really hope we'll soon move to a compute-only way of
| handling graphics.
|
| Would you have the time to expand on this thought a bit? I am
| curious. Thanks!!
| [deleted]
| littlestymaar wrote:
| > The overhead of Vulkan is absolutely insane (and not
| warranted, in my opinion)
|
| Would you mind expanding on this?
| mschuetz wrote:
| I meant the development/learning overhead. With Vulkan you
| can do incredible low-level optimizations to squeeze every
| last bit of performance out of your 3D application, but
| because you are basically mandated to do it that way, you
| have a very harsh learning curve and need lots of code for
| the simplest tasks. I'd prefer approaches that make the
| common things that everyone wants to do easy (draw your
| first simple scenes), and then optionally give you all the
| features to squeeze out performance where you really need
| to.
| Because at least for me, I don't work with massive scenes
| with millions of instances of thousands of different objects.
| I do real-time graphics research, mostly with compute, and
| I'd just like to present the triangles I've created or the
| framebuffers I created via compute (like shadertoy).
|
| I'll readily admit, I'm neither smart nor patient enough for
| Vulkan so I quickly gave up and learned CUDA instead, because
| it was way easier to write a naive software-rasterizer for
| triangles in CUDA, than it was to combine a compute shader
| and a vertex+fragment shader in Vulkan. I'm just rendering a
| single buffer with ~100k compute-generated triangles, and
| learning Vulkan for that just wasn't worth it.
| DeathArrow wrote:
| Well, then there's OpenGL and DirectX.
| mschuetz wrote:
| Yeah, but OpenGL doesn't get updates anymore. My timeline
| goes like: I needed pointers (and pointer casting) for my
| compute shaders, so I checked the corresponding GLSL
| extension, which was only available in Vulkan, so I tried
| switching from OpenGL to Vulkan. After a week I gave up -
| the pointer/buffer reference extension did not look
| promising anyway - and tried out CUDA instead. That's
| when I found out that CUDA is the greatest shit ever.
| That's what I want graphics programming to be like. Since
| then I just render all the triangles and lines in CUDA,
| because it easily handles hundreds of thousands of them
| in real-time with a naive, unoptimized software
| rasterizer, and that's all I need - in addition to the
| billion points you can also render in real-time in CUDA
| with atomics.
| cesarb wrote:
| > That's when I found out that CUDA is the greatest shit
| ever. That's what I want graphics programming to be like.
|
| Something which requires you to buy new hardware from a
| specific brand, and load an out-of-tree binary-only
| module on your kernel? That's not what I want graphics
| programming to be like.
|
| The Vulkan API might be clunkier (I don't know, I haven't
| looked at the CUDA API, since I don't have the required
| hardware), but at least it can work everywhere.
| bogwog wrote:
| > The overhead of vulkan is absolutely insane
|
| Overstatement of the year award candidate.
|
| Vulkan and OpenGL both already support mesh shaders, which is a
| compute-oriented alternative to the traditional rasterization
| pipeline.
| rsp1984 wrote:
| > Overstatement of the year award candidate.
|
| I think he meant development overhead, not performance
| overhead.
| mschuetz wrote:
| Yes, sorry for the confusion. I'd just rather have an API
| where the common things are easy, and the super powerful
| low-level optimizations are optional.
| sounds wrote:
| There were several pushes to produce a "sane defaults"
| library that makes Vulkan a lot less verbose.
|
| OpenGL is exactly what you said, "an API where the common
| things are easy, and the super powerful low-level
| optimizations are optional."
| mschuetz wrote:
| > Vulkan and OpenGL both already support mesh shaders, which
| is a compute-oriented alternative to the traditional
| rasterization pipeline.
|
| Mesh shaders are a step in the right direction, but they are
| still embedded in all that unnecessary Vulkan fluff. I would
| want these things in CUDA because it does the opposite
| approach of Vulkan - it makes the common things easy, and the
| hard/powerful things optional. Just let me draw things
| directly in CUDA, and maybe give access to the hardware
| rasterizer via a CUDA call.
| kevingadd wrote:
| Isn't nanite hardware rasterization with software shading? Like
| using the GPU to draw triangle/cluster ID #s into the FB and
| then shading those?
| zbendefy wrote:
| It uses the rasterization HW for large triangles and uses a
| software rasterizer in compute for small triangles.
|
| The HW rasterizer is not that efficient if the triangles are
| tiny, which is the case for nanite.
| baybal2 wrote:
| But that is one generation of GPUs away when they will make
| some computational shortcut for small triangles, which to
| me seems to be rather trivial to implement.
| Jasper_ wrote:
| For small triangles, they use a software rasterizer into the
| V-buffer. Obviously for 1px triangles since they don't want
| to waste quad overdraw, but I think they found it's faster up
| to 12px/tri or so on AMD.
| mschuetz wrote:
| Part of Nanite is software rasterization by rendering
| triangles with 64 bit atomics. You can simply draw the
| closest fragment of a triangle to screen via
| atomicMin(framebuffer[pixelID], (depth << 32) |
| triangleData).
| bangonkeyboard wrote:
| I had to double check that OpenGL 4.6 (released in 2017) really
| still is the latest version.
| pjmlp wrote:
| It is, and there is no red book edition covering it.
| shmerl wrote:
| Vulkan was initially called OpenGL-next.
| shmerl wrote:
| Good to see Zink now providing OpenGL over Vulkan for smaller
| GPU makers. I wonder if the big ones will eventually use it
| too.
| snvzz wrote:
| I suspect that, by now, the only reason the other mesa3d opengl
| drivers exist is historical.
|
| If these vendors came up with different enough hardware to need
| a new driver from scratch, they'd just focus on Vulkan and use
| Zink for opengl.
|
| After all, opengl is a relatively high level API. It is
| sensible to implement it in a hardware-independent manner on
| top of Vulkan.
| clhodapp wrote:
| The big players will probably switch once we get to the point
| that nothing still running on OpenGL needs optimally efficient
| usage of the GPU (either because it's old or because it was
| never performance-critical to start with, such as a student
| project).
| snvzz wrote:
| I am hopeful this will help mesa3d support with RISC-V SoCs such
| as JH7110 (and thus VisionFive2 and Star64) be excellent.
| rjsw wrote:
| There are links from the blog post to the Mesa sources and to a
| Linux tree containing their DRM kernel driver.
| snvzz wrote:
| I am aware.
|
| Currently they do not support the specific variant used in
| JH7110, but they seem to both be variants of the same
| architecture.
|
| JH7110 has the BXE-4-32-MC1[0], whereas the driver currently
| supports the BXS-4-64-MC1[1].
|
| 0. https://www.imaginationtech.com/product/img-bxe-4-32-mc1/
|
| 1. https://www.imaginationtech.com/product/img-bxs-4-64-mc1/
___________________________________________________________________
(page generated 2023-07-06 23:00 UTC)