[HN Gopher] Ask HN: Will there ever be a vendor agnostic GPU int...
       ___________________________________________________________________
        
       Ask HN: Will there ever be a vendor agnostic GPU interface?
        
       We moved from the almost write-once-run-anywhere OpenGL to the
       vendor-dependent Vulkan / D3D12 / Metal / WebGPU, which are
       kinda impossible to support all at once without heavy
       abstraction layers and shader transpilers. Is there any
       possibility that we'll have easy cross-platform graphics again
       in the future?
        
       Author : slmjkdbtl
       Score  : 61 points
       Date   : 2021-10-31 17:36 UTC (5 hours ago)
        
       | [deleted]
        
       | sebow wrote:
       | Vulkan is the best choice right now, but "money talks" when it
       | comes to what framework developers "should" use.
        
       | pjmlp wrote:
       | OpenGL so far has hardly been a thing in game consoles anyway,
       | and on Windows most vendors had crappy drivers versus their
       | DirectX ones.
       | 
       | Also, contrary to common belief, OpenGL is "portable" rather
       | than portable: the spaghetti of vendor extensions, GPU driver
       | workarounds, differences between GL and GL ES, and shading
       | language compilers leads to multiple code paths hardly
       | different from using multiple 3D APIs.
        
       | tester756 wrote:
       | Is there something like LLVM IR for GPUs?
        
         | pcwalton wrote:
         | Yes, that's SPIR-V. Unfortunately it's not universally
         | supported, but Khronos maintains transpilers to HLSL and Metal
         | Shading Language.
        
       | [deleted]
        
       | modeless wrote:
       | WebGPU _is_ the easy cross-platform graphics you're requesting.
       | It's vendor agnostic and not limited to the web. You can use
       | WebGPU from native code and you won't need your own shader
       | transpilers or abstraction layers. Your code will (soon) just
       | work on any platform.
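       | 
       | To make that concrete, a minimal sketch of the single code path
       | (browser-flavoured TypeScript against the @webgpu/types
       | declarations; native bindings such as Dawn or wgpu expose
       | essentially the same calls, and the API was still shifting as
       | of late 2021, so treat the details as approximate):
       | 
       |   async function main(): Promise<void> {
       |     // Same code path on every OS and GPU vendor; the browser
       |     // (or a native library such as Dawn or wgpu) maps it onto
       |     // D3D12, Metal, or Vulkan underneath.
       |     const adapter = await navigator.gpu.requestAdapter();
       |     if (!adapter) throw new Error("WebGPU not available");
       |     const device = await adapter.requestDevice();
       | 
       |     // Submit an empty command buffer, just to show the flow.
       |     const encoder = device.createCommandEncoder();
       |     device.queue.submit([encoder.finish()]);
       |   }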
        
         | ajvs wrote:
         | The web is not vendor-agnostic. It's mostly run by Google
         | (Chromium-based browsers), with Firefox and Safari only owning
         | a fraction of the marketshare.
        
           | modeless wrote:
           | WebGPU is not the web, and the standards committee does not
           | make decisions according to market share. In any case the
           | relevant vendors here are GPU vendors, not browser vendors.
           | (Only one browser vendor happens to also be a GPU vendor.)
        
       | Jugurtha wrote:
       | Something like this: https://github.com/plaidml/plaidml ?
        
       | slimsag wrote:
       | In my view, WebGPU is the future of cross-platform graphics. I
       | wrote about this and why we chose it for a game engine we're
       | developing in Zig[0]
       | 
       | It's the one API that Mozilla, Google, Microsoft, Apple, and
       | Intel can all agree on. Nvidia and AMD are silent, but it's on
       | top of Metal/DirectX/Vulkan behind the scenes so they don't
       | really need to care.
       | 
       | Today, Vulkan with MoltenVK (for Apple devices) has more
       | _features_, but even that is not super clear-cut. For example,
       | mesh shaders are supported on AMD cards in DirectX 12 but not
       | Vulkan.
       | 
       | It's not just Apple who continues to push their own API, either.
       | DirectX 13 is likely coming soon, and from what I understand
       | Vulkan gets "ported" features after they land in DirectX.
       | 
       | If you want bleeding edge features, you're going to need to use
       | the native API: Vulkan on Linux, DirectX on Windows, and Metal on
       | Apple.
       | 
       | But if you just want the common denominator? WebGPU native isn't
       | a bad choice IMO. And it being a cleaner API than Vulkan may mean
       | you need less abstraction on top to work with it.
       | 
       | [0] https://devlog.hexops.com/2021/mach-engine-the-future-of-
       | gra...
        
         | Jayschwa wrote:
         | > In my view, WebGPU is the future of cross-platform graphics.
         | 
         | How long before this API is ubiquitous though? For maximum
         | compatibility today, wouldn't something like OpenGL ES / WebGL
         | make the most sense?
        
           | slimsag wrote:
            | Google's native implementation of WebGPU, Dawn, has
            | DirectX/Metal/Vulkan backends, as well as a fallback to
            | OpenGL when those are not available. So it's already
            | ubiquitous today; you just can't use it in the browser
            | (yet).
        
           | modeless wrote:
           | Yes, if you need a cross platform graphics API with maximum
           | compatibility today, an OpenGL ES 2.0 context provided by
           | ANGLE is what you should use (or ES 3.0 if dropping support
           | for very old hardware is OK). Not only does it have very wide
           | platform support, including platforms that don't natively
           | support OpenGL ES, but it will _behave_ the same everywhere,
           | which is something even OpenGL has always struggled with when
            | using platform/driver implementations instead of ANGLE
            | (check out Chromium's list of driver bug workarounds:
            | https://chromium.googlesource.com/chromium/src/+/HEAD/gpu/co...).
           | 
           | Eventually WebGPU will get there too, but it will take time.
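            | 
            | For the browser side of that, a minimal sketch in
            | TypeScript (WebGL 1.0 corresponds to the ES 2.0 feature
            | level and is what ANGLE backs in Chromium; a native app
            | would instead link ANGLE's EGL/GLES libraries directly):
            | 
            |   const canvas = document.createElement("canvas");
            |   const gl = canvas.getContext("webgl");
            |   if (!gl) throw new Error("WebGL (ES 2.0 level) missing");
            | 
            |   // Capabilities still need runtime probing, e.g.:
            |   const maxTex = gl.getParameter(gl.MAX_TEXTURE_SIZE);
            |   const floatTex =
            |     gl.getExtension("OES_texture_float") !== null;
            |   console.log({ maxTex, floatTex });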
        
         | pjmlp wrote:
         | Sony and Nintendo don't care about WebGPU.
        
           | slimsag wrote:
            | Right. Nintendo cares about NVN[0], the lower-level API
            | for the Switch (not Vulkan, which they added much later
            | as an option).
           | 
           | Sony cares about GNM and their custom shader language
           | PSSL[1]. Seems unlikely they'll support Vulkan anytime soon
           | either.
           | 
           | Arguably, it is more likely WebGPU will continue to have a
            | reasonable translation to these platforms' native APIs
           | compared to Vulkan, given that WebGPU is the common
           | denominator between Vulkan/Metal/DirectX.
           | 
            | That's not to speak of the market pressure that a large
            | swath of browser-based games written with WebGPU would
            | exert in encouraging Nintendo/Sony to care about these.
           | 
           | [0] https://blogs.nvidia.com/blog/2016/10/20/nintendo-switch/
           | 
           | [1] https://en.wikipedia.org/wiki/PlayStation_4_system_softwa
           | re#...
        
       | pcwalton wrote:
       | There's _mostly_ not a reason why we couldn't have a single API
       | other than politics. My opinionated takes (opinions mine and not
       | those of my employer, etc.):
       | 
       | Vulkan: Standardized by a standards body (Khronos). In an ideal
       | world we would all be using this.
       | 
       | D3D12: Microsoft likes Direct3D and has the market power to be
       | able to push its own API. However, the situation on Windows isn't
       | as bad as on macOS/iOS because drivers generally support Vulkan
       | as well.
       | 
       | Metal: This mostly exists because Apple refuses to support
       | anything Khronos related, for not-particularly-compelling
       | reasons. It does have the excuse that it was being developed
       | before Vulkan, but in 2021 its existence is not particularly
       | well-motivated.
       | 
       | WebGPU: As an API, WebGPU in some form has to exist because
       | Vulkan is not safe (in the memory-safe sense). For untrusted
       | content on the Web, you need a somewhat higher-level API than
       | Vulkan that automatically inserts barriers, more like Metal
       | (though Metal is not particularly principled about this and is
       | unsafe in some ways as well). That being said, Web Shading
       | Language is just Apple playing politics again regarding Khronos,
       | as the rest of the committee wanted to use SPIR-V. There's no
       | compelling reason why Web Shading Language needs to exist.
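       | 
       | For concreteness on that last point: in the WebGPU API as
       | currently drafted, shader source is handed to the driver stack
       | as WGSL text. A rough sketch (TypeScript against the
       | @webgpu/types declarations; WGSL syntax has shifted during
       | standardization, so the details are approximate):
       | 
       |   // Assume `device` came from requestAdapter/requestDevice.
       |   declare const device: GPUDevice;
       | 
       |   const shaderModule = device.createShaderModule({
       |     code: /* WGSL, passed as text */ `
       |       @fragment
       |       fn fs_main() -> @location(0) vec4<f32> {
       |         return vec4<f32>(1.0, 0.0, 0.0, 1.0); // solid red
       |       }
       |     `,
       |   });
       | 
       |   // A SPIR-V based design (what much of the committee
       |   // reportedly wanted) would have accepted a binary module
       |   // instead, along the lines of a hypothetical:
       |   //   device.createShaderModule({ code: spirvWords });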
        
         | pjmlp wrote:
         | You missed LibGNM and NVN.
         | 
          | DirectX and Metal are great, with high-level bindings and
          | much more productive shading languages, instead of the "C
          | way" that Khronos keeps pushing for.
        
           | roblabla wrote:
           | Khronos is pushing for SPIR-V, which isn't a language but a
            | bytecode. There are a lot of good reasons to do this: it
            | allows innovation in shading languages to take place,
            | since anyone can build their own language and compile it
            | down to SPIR-V, and it's much easier to parse than, say,
            | GLSL, making it less likely that the proprietary drivers
            | are doing something stupid.
           | 
            | The DirectX shading language (HLSL) and the Metal shading
            | language (MSL) can both be compiled down to SPIR-V, apart
            | from the occasional missing feature that's not
            | standardized yet.
        
         | enos_feedler wrote:
         | Politics only plays such a strong role because at the end of
         | the day having a single API doesn't solve a real problem. It's
         | desirable because it reduces the surface area of new things
         | developers have to learn if they want to target all these
         | systems. However, mostly different people work on different
         | targets, and it really just helps with learning. In terms of
         | actual building, deploying, etc., it's probably best to have
         | proprietary things that wrap themselves cleanly around the
         | systems they operate in.
        
           | pcwalton wrote:
           | Strong disagree as someone who spent years on a cross-
           | platform accelerated vector graphics library in industry.
           | Dealing with API differences and differences in tooling
           | (Metal's tooling is far behind RenderDoc for example) took an
           | enormous amount of my time, time that could otherwise have
           | been spent fixing bugs and adding features.
        
             | enos_feedler wrote:
             | I don't disagree you had these problems. I'm just saying
             | the market of developers in your pool is not very big. Most
              | people are happy just learning a single platform, chip,
              | etc., and getting a job writing code against it. I get
              | the allure of "if only every target spoke the language
              | I know... then I could target all the things..." but in
              | practice it's not really a problem.
        
               | ncmncm wrote:
               | The sheer quantity of things _not even started_ because
               | of the excess cost of targeting everything absolutely
               | dwarfs the sum total of what is now built for _this_ or
               | _that_ target. We have seen this dynamic play out over
               | and over again, in the past six+ decades, going all the
               | way back to FORTRAN.
               | 
               | A standard target would multiply the size of this
               | "market" many times over, generating additional demand
               | for implementations throughout multiple industries.
               | Balkanization has always been a debilitating tax on
               | everyone, not excepting "market leaders" scrapping over
               | the artificially reduced pie. Everyone made _way_ more
               | money after FORTRAN than before.
        
               | enos_feedler wrote:
                | If the cost is attributed to "targeting everything"
                | as you say, then wouldn't those things just get
                | started by targeting one thing? Neither you nor I can
                | prove there are things not invented because of this,
                | so nothing to debate here. The reason pcwalton spent
                | lots of effort jumping through hoops building a
                | vector graphics library that targets lots of
                | platforms is that we already knew vector graphics was
                | something we had on a single platform. It is really
                | hard for me to imagine something not being
                | invented/created at all because there isn't a single
                | language that can express its computation everywhere
                | simultaneously.
        
               | ncmncm wrote:
               | That argument has been repeated over and over again, over
               | decades, and is proven wildly off the mark the moment
               | standardization enables the test, every time.
               | 
               | Strangely, the fact _never_ prevents it being trotted out
               | again, every time.
        
               | enos_feedler wrote:
               | The only time standardization unlocks innovation is when
               | agreement is core to the function of the system (TCP/IP,
               | communication protocols, etc). If it's simply to create a
               | common abstraction across similar things, then it won't
               | unlock anything. It's simply a nice to have.
        
               | smoldesu wrote:
               | Why doesn't Apple consider their GPU to be a core
               | function of their system, then?
        
               | ncmncm wrote:
               | Strangely, the fact _still_ never prevents it being
               | trotted out again, every time.
        
         | echelon wrote:
         | Apple doesn't want the web to be a viable alternative to the
         | app store. They'll continue to choose strategies that hold it
         | back.
        
           | slimsag wrote:
            | It's not so clear-cut; they're pouring a lot of resources
           | into WebGPU and Safari.
        
             | astlouis44 wrote:
              | This; they literally posted a TON of WebGPU engineering
              | jobs just recently for their WebKit team.
        
           | pcwalton wrote:
           | For what it's worth, I do not believe that this is Apple's
           | strategy. Squabbles with Khronos suffice to explain
           | everything.
        
           | mensetmanusman wrote:
            | This is true; there is so much more that Safari could be
            | capable of if it gave developers lower-level hardware
            | capabilities, but that would hurt the bottom line...
        
         | amelius wrote:
         | > There's mostly not a reason why we couldn't have a single API
         | other than politics.
         | 
         | If you follow Apple's logic, this is totally wrong. Everything
         | has to be integrated in order to achieve the best possible
         | experience, and public APIs would be a bottleneck that make
         | things inflexible for the company that tries to exploit the
         | technology.
        
         | slimsag wrote:
         | I agree with you about Web Shading Language not needing to
         | exist.
         | 
         | But I also don't think it's the end of the world, either.
         | Google's Tint can translate between SPIR-V/GLSL/etc -> WGSL and
         | vice-versa pretty well.
         | 
         | If that's what it takes for Apple to be on board with
         | WebGPU, and we later get an extension to load SPIR-V (or
         | some other WGSL binary form) for optimization, I'm cool
         | with that.
        
           | pcwalton wrote:
           | Yeah, the silver lining is that the amount of effort folks
           | have gone to in order to ensure some level of
           | interoperability has been amazing. Khronos developers working
           | on SPIRV-Cross, wgpu-rs folks like kvark, MoltenVK people,
           | Wine devs, etc. are unsung heroes of the graphics industry
           | and deserve tons of support.
        
       | PeterisP wrote:
       | IMHO the requirement for that to happen would be a major
       | slowdown in the "GPU race": consumers decide that they have
       | "enough" power and features, companies are unable to convince
       | them otherwise and get pushed to commoditize and standardize
       | their products as every product "ticks all the checkboxes",
       | and one of those checkboxes happens to be good standards
       | support. Alternatively, extreme fragmentation might force it;
       | that happened decades ago when there were very many video card
       | manufacturers, but it's impossible now given the huge barriers
       | to entry for competitive GPUs.
       | 
       | But while companies are able to differentiate their products
       | feature-wise (or even if they can't, but strongly believe they
       | can), the incentives are set against it, so it's not likely to
       | happen; the GPU vendors will push for vendor-specific aspects.
        
         | KennyBlanken wrote:
         | I'd say raytracing was NVIDIA trying to distract everyone from
         | the fact that their GPUs had in fact been 'enough' for a while.
         | "RTX ON" was widely meme'd when the first RTX cards came out
         | (edit: partly because at the time game publishers weren't
         | really making good RTX games. Raytracing is mostly pointless if
         | the game publisher doesn't bother to put a lot of effort into
         | scene design, materials, and lighting. Control is a great
         | example of a game that is gorgeous even without raytracing, and
         | reportedly looks even better with raytracing turned on.)
         | 
         | I don't think the current state could be described as a "race".
         | 
         | NVIDIA makes good stuff and great drivers but charges a fortune
          | (+$1500 for a high-end GPU?!). AMD has made space heaters that
         | happen to have video-out ports and their cards are useless for
         | compute work because so many projects support CUDA and nothing
         | else. Their current lineup cheats benchmarks by overclocking
         | the cards and thermally throttling shortly after the time it
         | takes to run a typical benchmark.
         | 
         | Edit: since I'm now being 'corrected': look at the TDP for
         | same-generation, same-product-tier NVIDIA and AMD cards. AMD's
         | cards have always used more power for similar performance.
         | 
         | People think certain GPUs "run hot" because
         | 
         | - they buy the cheapest GPU they can which means the heatsink
         | and fans are terrible
         | 
         | - they don't adjust the stock fan curve (most GPUs these days
         | don't even turn on the fans until they hit 60c, and only max
         | out the fans when the GPU die temperature is very hot,
         | because people care mostly about noise)
         | 
         | - they don't have a well-ventilated case (note I said well-
         | ventilated, not "stuffed with fans")
         | 
         | Buy a properly ventilated case, a mid-or-high-tier card from a
         | reputable manufacturer, and adjust the fan curve. Find the
         | temperature the card sits at during light desktop use and set
         | the fans to kick in slightly above that. Then force the fans to
         | run at full speed and benchmark, seeing what the die
         | temperature maxes out at. Set the fan curve to 100% at
         | slightly below that temp.
        
           | zokier wrote:
           | > Edit: since I'm now being 'corrected': look at the TDP for
           | same-generation, same-product-tier NVIDIA and AMD cards.
           | AMD's cards have always used more power for similar
           | performance
           | 
           | Benchmarks disagree with that assertion:
           | 
           | https://www.techpowerup.com/review/nvidia-geforce-
           | rtx-3080-t...
           | 
            | The RTX 3000 series has consistently worse perf/watt than
            | its AMD RX 6000 series peers.
        
             | buildbot wrote:
             | Yeah, isn't AMD on TSMC 7nm vs. Nvidia with Samsung 8nm?
             | Samsung has much higher power usage than TSMC from what
             | I've heard.
        
               | dathinab wrote:
                | Comparing the nm figures between different fabs is
                | not very useful and can be completely non-
                | descriptive. (Not only do different companies mean
                | different things when they say X nm, the numbers they
                | quote also often don't describe the performance
                | characteristics well at all.)
               | 
               | > Samsung has much higher power usage than TSMC from what
               | I've heard.
               | 
                | It's possible, but whether Nvidia's worse power
                | efficiency comes from that or from architectural
                | decisions in their design can't really be said.
        
           | rstat1 wrote:
           | NVIDIA GPUs also run extremely hot, and also overclock
           | themselves when under any substantial load.
           | 
           | They even have a name for it: "GPU Boost"
        
             | smoldesu wrote:
              | I wouldn't say they run "extremely hot"; my 1050 Ti powers
             | both my 1440p144hz display and my 1080p75hz one, and it
             | idles just over 32c for the most part. Under load though,
             | like all GPUs, they will push their cooling to the limit to
             | get as close to their junction temp as possible without
             | causing lasting damage.
        
               | KennyBlanken wrote:
               | I was speaking to TDP. The actual temperature the card
               | runs at is highly dependent upon room temperature, how
               | good the cooling solution is on the card, how well-
                | ventilated the case is, and of course the workload.
                | Just because Windows says the GPU is at "100%"
                | doesn't mean it's using maximum power; I've had games
                | that make the fans go crazy at "100%" and games where
                | at "100%" the fans never leave their default speed.
               | 
               | Every vendor makes multiple tiers of a particular GPU and
               | you generally get what you pay for, though with
               | diminishing returns for someone who isn't looking to
               | overclock to the edge and get a few extra FPS. Good
               | vendors starting a tier or two above their budget tier
               | tend to produce cards with thermal solutions that are
               | fairly quiet at stock speeds/voltages when gaming in a
               | properly ventilated case in a comfortably cool room.
               | 
                | On 1xxx series cards, for example, MSI's Gaming X
                | line comes with an absolutely massive heatsink. I had
                | an MSI 1060 Gaming X that was overclocked; even fully
                | loaded, the fans weren't noticeable, much less loud,
                | and I had a terribly ventilated case.
               | 
               | My current card is a 1070 ti, from a reputable
               | manufacturer and it's a mid-tier card. I have a pretty
               | aggressive fan curve on it because I want longevity
               | (capacitor lifetime drops appreciably with temperature.)
               | Even then, I almost never notice the fans, but I have
               | upgraded to one of the most well-ventilated $100-ish
               | cases.
               | 
               | Stay well clear of single fan GPUs if you intend to do
               | any gaming or compute work beyond "very occasionally."
               | The fan noise will drive you nuts and the card will
               | quickly thermally throttle.
               | 
               | If you care about your GPU lasting, lower the fan shutoff
               | point to be more like 35-40c instead of the usual 60c, or
               | just set them to run at the minimum speed all the time.
                | Fans are easy to replace; caps aren't... and on a
                | quality card, the fans should be nearly silent at
                | minimum speed.
        
               | formerly_proven wrote:
                | > Just because Windows says the GPU is at "100%"
                | doesn't mean it's using maximum power; I've had games
                | that make the fans go crazy at "100%" and games where
                | at "100%" the fans never leave their default speed.
               | 
               | There are even some games where the "GPU load" is 30 %
               | but the card is running at 150 % power and destroying its
               | VRMs :)
        
           | zokier wrote:
           | > I'd say raytracing was NVIDIA trying to distract everyone
           | from the fact that their GPUs had in fact been 'enough' for a
           | while
           | 
           | "Enough" maybe for 1080p gaming, but 4k remains still heavy
           | for the top tier GPUs even without raytracing; there are
           | plenty of titles where 3080ti will not hit steady 60 fps on
           | 4k, nevermind about 120 or 144 fps.
        
       | astlouis44 wrote:
       | WebGPU is the proper answer here; it's truly the future of cross-
       | platform graphics. As developers we will be able to ship powerful
       | 3D/VR experiences that "just work" everywhere that browsers
       | exist.
       | 
       | The key is getting the major game engines to support these
       | standards, which is what my startup is working on.
       | 
       | We're currently focusing on bringing WebGPU + WebXR + improved
       | WASM support so real-time 3D apps and games can be deployed to
       | end users wherever they are, no walled gardens required. It's
       | also the path to the metaverse, which needs to be built on open
       | standards and protocols in order to be truly accessible to all,
       | and not vendor-locked by an entity that seeks to own it all like
       | Meta (FB).
       | 
       | If you're interested in hearing more, or want to leverage the
       | platform we're building at Wonder Interactive (which is
       | approaching general availability), you can join our Discord
       | here:
       | 
       | https://discord.gg/3t8bj5R
        
       | KennyBlanken wrote:
       | Vulkan is not vendor-dependent. It runs on AMD, NVIDIA, Intel,
       | and Qualcomm chipsets. It runs on a huge number of operating
       | systems.
       | 
       | (Vulkan on the Pi 4 just got a big performance boost, btw.)
       | 
       | It seems the only gotcha is that Apple is being difficult and
       | not supporting Vulkan on macOS, but MoltenVK is "within a
       | handful of tests" of Vulkan 1.0 conformance.
        
         | lights0123 wrote:
          | And MoltenVK isn't just an experiment or hobby project; it's
         | already being used in production--one prominent example is
         | Autodesk Fusion 360, which even uses it on M1.
        
         | Mikeb85 wrote:
         | Apple is holding back both the open web and open graphics.
         | They're a cancer on the industry (and why I'll never give them
         | a penny nor ever develop for one of their platforms).
        
         | ncmncm wrote:
         | And, Kompute (kompute.cc) is a portable substitute for CUDA,
         | given Vulkan 1.1.
         | 
         | I wonder now whether the Vulkan API is an adequate interface to
         | securely expose GPU capabilities to multiple VMs, i.e. without
         | those VMs being able to see other VMs' memory. If not, what
         | would need to happen to get there?
        
       | oneplane wrote:
       | It will probably not happen as long as there is a commercial
       | incentive to not do that. What you will get is a subset of
       | features that can be common in a shared interface.
       | 
       | At this time a GPU manufacturer will conform to whatever
       | workloads will run on the device, instead of creating a shared
       | API and letting someone else choose how to use it.
       | 
       | Just compare CUDA and OpenCL, which make use of the GPU for
       | non-graphics workloads: in theory this was a chance to create
       | a universal interface, but in practice it turns out that you
       | can make more money if your special sauce is better than the
       | sauce of others, even if it's not better in all scenarios.
       | 
       | It seems the truly agnostic interface will be at the engine
       | level, where a compute library or graphics engine runs on top
       | of a bunch of non-agnostic interfaces, and the real interface
       | you'll be working with is the interface of the engine.
        
       | fulafel wrote:
       | We have WebGL, so we have a near-universally available cross-
       | platform interface now. It lags the current hardware a lot but
       | seems to be the only actually working cross-platform thing
       | we'll have for a while.
       | 
       | If you look at WebGL implementations, the heroic amount of
       | driver bug workarounds found through trial and error over the
       | years, and the feature compromises due to said bugs, it's hard
       | to believe anything will match and replace it for a long time.
        
       | troymc wrote:
       | Pragmatically, many developers can just use the APIs provided by
       | Unreal Engine, Unity, Godot, Qt, or whatever and let _them_
       | figure out how to support various GPUs.
        
       | enos_feedler wrote:
       | The issue is that hardware systems have evolved in a very
       | dynamic way over the years. Initially GPUs were mostly fixed-
       | function hardware with proprietary APIs for each driver. Then
       | we had Microsoft, which was able to flex its platform power to
       | build a cross-vendor interface in Direct3D. Now we have Apple
       | doing the same thing with Metal. The issue is going to be the
       | ever-increasing integration of heterogeneous compute engines
       | in unique ways: Google Tensor SoC, Apple A/M chips, etc. AMD
       | is going to increasingly license their IP to build out custom
       | SoCs for companies like Samsung.
       | 
       | The best chance to build a standard on top of all this change
       | is to not view this as a "GPU abstraction" but rather as a
       | collection of compute engines. At this layer, the place where
       | this happens will be the compiler toolchain. IMO (as an ex-
       | NVIDIA compiler guy) the best effort happening right now is
       | MLIR [1]. I don't know if dialects are the right abstraction,
       | but that's where things are at right now. The best example is
       | IREE [2], which is written at Google for mapping a lot of the
       | sophisticated translation of inference workloads on mobile for
       | acceleration across GPUs, neural engines, etc.
       | 
       | [1] https://mlir.llvm.org
       | 
       | [2] https://google.github.io/iree/
        
         | pjmlp wrote:
         | Nintendo, Sony and Sega were there first.
        
           | monocasa wrote:
            | Arguably SGI with IRIS GL was there first. The PS1 and
            | N64 graphics APIs were both heavily inspired by IRIS GL
            | (the N64 pretty much directly, being a sawed-down, ultra-
            | budget SGI workstation). And earlier consoles, and Sega
            | through the Saturn, were more "here's docs on the
            | hardware registers and command lists" rather than "here's
            | an API to use".
        
         | ArtWomb wrote:
         | >>> https://google.github.io/iree/
         | 
         | Ambitious project ;)
        
           | enos_feedler wrote:
            | Yes it is. But if this doesn't work, it probably never will.
           | This is the best chance we have.
        
         | dig1 wrote:
         | There is also Harlan [1], which created a bit of buzz a couple
         | of years ago. Sadly it is not maintained anymore...
         | 
         | [1] https://github.com/eholk/harlan
        
       | mkl95 wrote:
       | Yes, and I believe we will have it this decade. The intention is
       | there, so it's a matter of money and time, mostly time.
        
       | glitchc wrote:
       | We don't want a standard.
       | 
       | One size fits all doesn't work for customized hardware, as OpenGL
       | amply demonstrated, which is why it's going away. OpenGL started
       | dying the day NVidia introduced the GeForce series with the T&L
       | engine. That was followed by shaders, antialiasing, anisotropic
       | filtering and now raytracing and DLSS, not to mention innovations
       | on the compute side and GSync/Freesync implementations. Any kind
       | of standard acts as a bottleneck for new hardware. It's too slow
       | to adapt to new features because it requires review and
       | ratification, and then of course, an abstraction layer is needed
       | to support hardware missing those features. This adds complexity.
       | Standardized APIs are a dead-end, just like standardized CPU ISAs
       | are a dead-end. They are trying to solve the same problem in a
       | wrong-headed fashion.
       | 
       | Apple's hardware is completely different from NVidia's. It makes
       | sense to have a completely different API to fully leverage the
       | silicon.
        
         | dathinab wrote:
         | > as OpenGL amply demonstrated
         | 
         | Not really; OpenGL had/has many problems, but having a
         | standardized interface isn't one of them (I mean, one
         | problem is the inconsistency in which extensions are
         | available, and many extensions being (or at least starting
         | off as) somewhat vendor-specific in practice...)
         | 
         | Sure, for technological advancement we will have vendor-
         | specific code, and you could say we want it.
         | 
         | BUT most applications do not need it. Even many games don't
         | need it. Most applications, and any non-"AAA"-style games,
         | often just want an easy way to have reasonably well-working
         | graphics which also continue to work. Sure, with such a
         | hypothetical API the "standardized" way would be updated
         | with new features and APIs from time to time, and maybe some
         | old features would be deprecated and emulated instead of
         | having native hardware support, but thanks to improvements
         | in hardware, software written "back then" would still work.
         | WebGPU seems to go in the right direction, but Apple is
         | messing it up again; then again, "works on the Web
         | everywhere but Apple" is something I'm already getting used
         | to.
         | 
         | > It's too slow to adapt to new features because it
         | 
         | Most software just wants a frequently updated, stable core
         | set of features which get adopted _after_ they have been
         | available in non-standardized APIs for a while.
         | 
         | > standardized CPU ISAs
         | 
         | Except that ALL ISAs today are standardized, sure with
         | extensions, but the part most people use is standardized
         | within each ISA, whether that's ARM, x86, RISC-V, etc.
         | Standards are what compiler makers and operating system
         | designers rely on when it comes to designing the "generic",
         | "general purpose parts" of their software. I mean, hardly
         | anyone would be using e.g. RISC-V if there weren't a
         | standard for how the most "core" instructions are handled.
         | 
         | > It makes sense to have a completely different
         | 
         | For native, state-of-the-art, highly customized
         | applications, yes. For the rest, no. It's just pain and a
         | lot of additional development cost, and companies hate cost.
        
       | corysama wrote:
       | How are Vulkan and WebGPU vendor-dependent?
        
         | jabl wrote:
         | From what I've read Vulkan leaves a lot of functionality and
         | capability as optional [1], meaning that the developer might
         | need to create multiple code paths for different HW.
         | 
         | [1] As an example, Raspberry Pi 4 was recently qualified as
         | Vulkan 1.1 conformant, even though the GPU lacks features
         | required to pass as an OpenGL 3.0 GPU.
        
           | corysama wrote:
           | Having worked in highly cross-platform graphics for a very
           | long time [1] I can say that you only get two options: 1)
           | Separate code paths. 2) Least common denominator of the
           | required target devices. For example: With the Pi 4 and
           | OpenGL, you can either limit all devices to the feature set
            | of the Pi 4 or have a separate code path for more capable
           | devices.
           | 
           | [1] https://www.reddit.com/r/gamedev/comments/xddlp/describe_
           | wha...
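            | 
            | The same trade-off shows up in WebGPU, for what it's
            | worth: optional capabilities are surfaced on the adapter,
            | and you either stick to the guaranteed baseline or branch
            | on them. A small sketch (TypeScript against the
            | @webgpu/types declarations; "texture-compression-bc" is
            | one of the optional feature names in the current draft):
            | 
            |   async function pickDevice(): Promise<GPUDevice> {
            |     const adapter = await navigator.gpu.requestAdapter();
            |     if (!adapter) throw new Error("no WebGPU adapter");
            | 
            |     // Least common denominator: request only the baseline.
            |     // Separate code paths: opt into optional features
            |     // where the adapter reports them.
            |     const bc = adapter.features
            |       .has("texture-compression-bc");
            |     const requiredFeatures: GPUFeatureName[] =
            |       bc ? ["texture-compression-bc"] : [];
            |     const device = await adapter.requestDevice({
            |       requiredFeatures,
            |     });
            | 
            |     // Elsewhere, branch the renderer on `bc` (BC textures
            |     // vs. an uncompressed / ETC2 fallback).
            |     return device;
            |   }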
        
       ___________________________________________________________________
       (page generated 2021-10-31 23:01 UTC)