[HN Gopher] How do I become a graphics programmer?
       ___________________________________________________________________
        
       How do I become a graphics programmer?
        
       Author : pjmlp
       Score  : 361 points
        Date   : 2023-11-22 20:54 UTC (1 day ago)
        
 (HTM) web link (gpuopen.com)
 (TXT) w3m dump (gpuopen.com)
        
       | lopkeny12ko wrote:
       | Ah, the AMD Game Engineering team, the same fine folks who
       | implemented driver-level "optimizations" for Counter Strike that
       | resulted in thousands of players getting permanently VAC banned.
        
         | voldacar wrote:
         | The vac bans got reversed.
        
           | qup wrote:
           | Temporarily permanently banned
        
           | lopkeny12ko wrote:
           | I fail to see how that excuses sloppy engineering.
        
             | charcircuit wrote:
              | It wasn't sloppy engineering, unless you are saying AMD
              | should have used a VAC bypass.
        
               | TillE wrote:
               | Injecting code into third-party DLLs is as sloppy as it
               | gets. It's an awful hack.
        
               | charcircuit wrote:
                | An AMD DLL has always been loaded into the game's
                | process.
        
             | voldacar wrote:
             | I agree with that, I was just correcting you. It's crazy
             | that they deliberately altered the code of a running
             | program like that, especially one which is going to have an
             | anti-cheat system.
        
         | account42 wrote:
         | You can't possibly hold AMD responsible for the behavior of
          | third-party black boxes? If Valve makes assumptions about the
         | graphics drivers beyond the API then that's on them.
        
       | bsder wrote:
       | This isn't hard: DirectX 12 and C++ and Visual Studio (not
       | VSCode) and Windows on an NVIDIA card.
       | 
       | Vulkan basically isn't relevant anymore unless you are doing
       | Android. Metal similarly unless you are doing iOS.
       | 
       | As a Linux user, this pains me. But it's just life. Windows-land
        | is _soooo_ much better for graphics programming that it's
       | absurd.
        
         | SeanAnderson wrote:
         | How does this sentiment align with the advent of WebGPU?
        
           | delta_p_delta_x wrote:
           | WebGPU is little more than a laboratory experiment right now.
            | There are probably _no_ industry implementations
            | (game/graphics engines, visualisers, etc.). Computer
            | graphics is particularly industry-driven--consider the
            | proportion of game devs who present at SIGGRAPH versus
            | academics.
           | 
           | I give it at least a _decade_ before WebGPU sees any
           | meaningful market share.
        
             | SeanAnderson wrote:
              | Bevy Engine (https://bevyengine.org/) is built on top of
             | wgpu (https://wgpu.rs/) and runs in-browser today
             | (https://bevyengine.org/news/bevy-webgpu/)
             | 
             | Bevy is the second highest starred game engine on GitHub's
             | Game Engine topic: https://github.com/topics/game-engine
             | 
             | I definitely agree that it's still new, but I don't feel
             | like it's quite as far out as you're implying. A year or
             | two at best IMO?
        
               | delta_p_delta_x wrote:
               | GitHub stars aren't really an accurate indicator of
               | market share. I've previously starred it, too, but I've
               | never used it.
               | 
               | I'd like to draw attention to the last phrase in my
               | comment:
               | 
               | > WebGPU sees any _meaningful_ market share
               | 
               | I am comparing anything implemented in WebGPU to existing
               | games that are played _today_ by gamers.
               | 
               | Finally, it's using Rust, and the majority of graphics +
               | game engines are written in C++ (with a minority in Java
               | and C#). Despite the safety and tooling benefits, moving
               | to Rust is still a change that companies have to
               | implement and educate their developers on, which is going
               | to take a lot of time. And game dev companies are fairly
               | slow at adopting new language standards (even if they
               | adopt new _graphics_ APIs and hardware fairly quickly,
               | e.g. ray-tracing).
               | 
               | I don't quite share your optimism; sorry.
        
               | SeanAnderson wrote:
               | Well, I'm writing a game intended for the web using Bevy
               | right now, so I'm clearly biased :)
               | 
                | There are also some industry veterans building games
                | using it; to name one specifically:
                | https://www.mobygames.com/person/6108/brandon-reinhart/
               | 
               | You can just use FFI if you want to integrate C crates
               | with Rust, it's not that bad. Figma uses a mixture of
               | Rust and C++ in the WASM that they ship.
               | 
               | Guess we'll see what the future holds.
        
               | delta_p_delta_x wrote:
               | All the best. I'll be happy to eat my words.
               | 
               | Rust is a great language (although I don't use it
               | myself), and WebGPU is pretty interesting as well.
        
             | Jasper_ wrote:
             | Chrome is switching to using Dawn (Google's WebGPU
             | implementation) for its Skia backend. This would render all
             | UI elements across Chrome using WebGPU. You can find plenty
             | of projects that are using WebGPU today. In the web space,
             | BabylonJS has had plenty of experience using it already,
             | and you can run several demos [0]. Offline, there are games
             | like Veloren [1] that use it exclusively as a graphics
             | backend. Plus a number of other projects I can't talk about
             | yet.
             | 
             | It's pretty obvious WebGPU is not going to replace any big
             | engine's custom-built graphics backend, but it's already
             | pretty capable, and I think it's going to be a good place
             | to start for beginners for a long time.
             | 
             | [0] https://github.com/Popov72/OceanDemo [1]
             | https://veloren.net/
        
           | kllrnohj wrote:
           | WebGPU needs to cater to the lowest common denominator so
           | it's unlikely to replace DX12/Vulkan/Metal in the demanding
           | usages. It's always going to lag behind on features,
           | capabilities, and performance.
           | 
           | But for the long tail of image filters, video effects, more
           | graphically basic games - yeah, it's a great fit there.
           | Probably.
        
             | pjmlp wrote:
              | To put it in perspective, WebGPU 1.0, which six years
              | later is still only available in Chrome, is the lowest
              | common denominator of 2015 GPU hardware.
        
           | pjmlp wrote:
           | WebGPU is a browser API.
           | 
            | Anyone making use of it outside the browser is doing
            | themselves a disservice by not using a middleware engine
            | instead.
            | 
            | WebGPU as specified cannot express many modern features,
            | and making use of extensions in wgpu or Dawn naturally
            | makes the code non-portable.
        
         | all2 wrote:
         | Why is this the case?
        
           | bsder wrote:
           | Because the amount of resources that Microsoft and NVIDIA
            | pour into graphics programming dwarfs the amount of resources
           | that _everybody else combined_ seems willing to put into it.
           | 
           | The implementations are better. The support is better. The
           | ecosystem is better. The debugging tools are better.
        
         | 0xDEF wrote:
         | Academic computer graphics research is in many places still
         | using OpenGL 4.1.
        
         | slabity wrote:
         | > Vulkan basically isn't relevant anymore unless you are doing
         | Android.
         | 
         | Why do you say this?
        
           | delta_p_delta_x wrote:
           | I'm not the parent commenter, but I'd like to explain their
           | logic, which has at least a modicum of reason to it.
           | 
           | About 99% of desktop video games (by far the largest clients
           | of graphics APIs) target Windows, and therefore target either
           | Direct3D 11 or Direct3D 12. This includes free-to-use game
            | engines such as CryEngine, Unity, Unreal, and Ren'Py.
           | Almost _all_ the famous, proprietary, high-performance game
           | engines (id Tech, Frostbite, Slipspace, REDEngine, Source)
           | target D3D exclusively. Vulkan is clearly a second-class
           | citizen on Windows. _Some_ engines target OpenGL, and they
           | tend to be used in (hurriedly dashed-out) console ports, but
           | in almost all cases they exhibit worse performance than their
           | D3D competitors.
           | 
            | Vulkan is completely absent from macOS and iOS, where Apple
            | has pushed its own API, Metal. OpenGL on macOS is deprecated
            | and is stuck on 4.1, missing all the advancements in 4.6,
           | which include mesh shader support.
           | 
           | Many Android games are likely still running GLES. Vulkan is
           | pretty hard to get started with, because things that are
           | implicitly handled by the OpenGL global state machine now
           | have to be explicitly handled by the developer, and chances
           | are the developers of the millions of throw-away
           | microtransaction-laden game apps on Android aren't writing
           | their own rendering engines in Vulkan.
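            | 
            | To make that concrete, here's a rough sketch of my own (not
            | from any particular engine; resource creation and vertex
            | attribute setup are omitted, and the handles are assumed to
            | exist): the same "bind state and draw" intent in both APIs.
            | 
            |     #include <GL/gl.h>
            |     #include <vulkan/vulkan.h>
            | 
            |     // OpenGL: mutate the hidden global state machine,
            |     // then draw.
            |     void drawGL(GLuint vbo, GLuint program) {
            |         glBindBuffer(GL_ARRAY_BUFFER, vbo);
            |         glUseProgram(program);
            |         glDrawArrays(GL_TRIANGLES, 0, 3);
            |     }
            | 
            |     // Vulkan: record explicit commands; every piece of
            |     // formerly hidden state is an object the application
            |     // created itself (a pipeline alone takes ~100 lines).
            |     void drawVk(VkCommandBuffer cmd, VkPipeline pso,
            |                 VkBuffer vb) {
            |         vkCmdBindPipeline(cmd,
            |             VK_PIPELINE_BIND_POINT_GRAPHICS, pso);
            |         VkDeviceSize offset = 0;
            |         vkCmdBindVertexBuffers(cmd, 0, 1, &vb, &offset);
            |         vkCmdDraw(cmd, 3, 1, 0, 0);
            |     }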
           | 
           | Therefore, despite all the positives of Vulkan--open-source
           | specification, cross-platform support, SPIR-V shader target
           | allowing shaders to be written in any language (HLSL, GLSL,
           | other esoteric languages that compile to SPIR-V), an
           | extension mechanism allowing fast iteration and updates--it
            | faces a fairly steep uphill battle.
           | 
           | EDIT: I was incorrect, id Tech supports Vulkan exclusively.
           | But it is a minority in a sea of D3D-first engines.
        
             | vivty wrote:
              | Maybe I am wrong, but this tweet and Wikipedia directly
              | contradict what you say (id Tech does indeed use Vulkan on
              | Windows):
              | https://twitter.com/billykhan/status/1028133659168186368
              | 
              | I am just doing game dev on the side, but I think nowadays
              | the graphics abstractions are fairly similar in how they
              | work (the modern abstractions, i.e. Metal, D3D12, Vulkan).
              | Ideally you choose the graphics abstraction that is
              | "native" to the platform, but Vulkan seems to be supported
              | very well on Windows (many AAA games use it and it works
              | great; many even run better with their Vulkan backend than
              | with their D3D12 counterpart). I use Vulkan so my graphics
              | can run on both Windows and Linux, which is why I chose it
              | instead of D3D12.
        
               | dagmx wrote:
               | You are correct that idTech targets Vulkan (and they have
               | some great GDC talks to boot)
               | 
               | They are however very much the minority.
               | 
                | I am skeptical of your claim about Vulkan abstraction
               | layers running better than DX12. If there is a
               | performance difference, it's likely elsewhere in the
               | stack and just tangentially related.
        
               | mabster wrote:
               | I'm surprised by that as well.
               | 
               | I haven't done this stuff for quite a while, so my memory
                | might be foggy, but the main advantage of Vulkan was that
               | you can control all the CPU locking rather than the API
               | doing it. This allows you to do stuff like prepare on one
               | thread and submit on another, etc.
               | 
               | But that would be negated if you're using an abstraction
               | layer.
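                | 
                | From memory, the pattern looks roughly like this (treat
                | it as a sketch, not production code): each worker thread
                | records into its own command buffer, allocated from its
                | own VkCommandPool since pools aren't thread-safe, and a
                | single thread submits.
                | 
                |     #include <vulkan/vulkan.h>
                |     #include <vector>
                | 
                |     // Runs on each worker thread: record one slice
                |     // of the frame, no locks needed.
                |     void recordSlice(VkCommandBuffer cmd) {
                |         VkCommandBufferBeginInfo begin{
                |             VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
                |         vkBeginCommandBuffer(cmd, &begin);
                |         // ... vkCmd* calls for this slice ...
                |         vkEndCommandBuffer(cmd);
                |     }
                | 
                |     // Runs on one thread: submission is the single
                |     // synchronization point the app controls.
                |     void submitFrame(VkQueue queue,
                |             const std::vector<VkCommandBuffer>& cmds) {
                |         VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
                |         submit.commandBufferCount =
                |             (uint32_t)cmds.size();
                |         submit.pCommandBuffers = cmds.data();
                |         vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
                |     }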
        
             | spookie wrote:
              | id Tech, Source 2, Unreal, and Unity support Vulkan.
             | 
              | id Tech targets Vulkan exclusively on PC:
             | https://twitter.com/billykhan/status/1028133659168186368
             | 
             | Other points are also blatantly untrue, but I think I have
             | made my point. At this point, targeting only DirectX is
             | shooting yourself in the foot.
             | 
             | Other references:
             | https://docs.unity3d.com/Manual/GraphicsAPIs.html
             | https://www.khronos.org/news/press/khronos-group-releases-
             | vu... https://docs.unrealengine.com/5.3/en-US/supported-
             | features-b...
        
               | delta_p_delta_x wrote:
               | While I was incorrect about id Tech (and have edited my
               | comment), I never made the point that any of the other
                | engines _didn't_ target Vulkan.
               | 
               | Where else is my comment untrue? Many engines and
               | rendering back-ends have only recently completed a
               | Vulkan-based implementation. I am confident in my
               | assessment that the large majority of existing
               | implementations are still running OpenGL and/or Direct3D,
               | if on Windows.
        
             | spookie wrote:
             | > Almost _all_ the famous, proprietary, high-performance
             | game engines (id Tech, Frostbite, Slipspace, REDEngine,
             | Source) target D3D exclusively.
             | 
              | You've said it there, hence my reply.
        
           | dagmx wrote:
           | I'm not the person you asked but my 2c (since I agree with
           | their point on Vulkan)
           | 
           | Very few video games are made with Vulkan. DirectX is the
           | primary API.
           | 
           | Android is the only place where Vulkan really has an
           | interesting market share.
           | 
           | For a beginner, it has an incredibly steep learning curve vs
           | DirectX as well. So given the low usage and high friction to
           | pick it up, you have a really poor ROI.
           | 
           | DirectX and Metal are much more conducive to getting results
           | quickly and efficiently.
        
             | account42 wrote:
             | > For a beginner, it has an incredibly steep learning curve
             | vs DirectX as well.
             | 
              | That's only because you refer to DirectX as a whole,
              | which includes plenty of older APIs. If you want to start
              | with those you can just as well start with OpenGL. If you
              | want to jump straight into D3D12 then that's not much
              | different from Vulkan.
        
               | dagmx wrote:
               | The topic was why not Vulkan for a beginner, and D3D12 is
               | a lot less work than Vulkan to get the same results.
               | 
               | And I'd still recommend D3D11 over OpenGL for a beginner
                | unless they really need multi-platform support. There
                | are better resources, and less setup work up front.
               | 
               | Honestly though, if I was recommending any graphics api
               | to start, it would be Metal. It has the best mix of ease
               | of use to modern low overhead api.
        
           | moron4hire wrote:
           | IDK, all the games I try to run on Linux seem to work better
            | under Windows/DX emulation than with native/Vulkan.
        
       | theodpHN wrote:
       | Practice.
       | 
       | https://www.carnegiehall.org/Explore/Articles/2020/04/10/The...
        
       | stemlord wrote:
        | I would suggest that beginners not lead with "which tools should
        | I capitalize on" and instead take a step back and ask "what do I
        | want to make?" Don't lose focus on the final output as you take
        | your first steps. In the world of computer graphics today there
        | are so many tools that abstract away various steps in the process
        | of drawing pixels to the screen that you could very easily waste
        | too much time suffering with low-level code up-front. You may
        | later realize that the niche you want to pursue, in the wide
        | array of industries that utilize graphics programming, actually
        | only hires people who use Unity, TouchDesigner, three.js, and
        | After Effects, and those people don't write a lick of C++ unless
        | push comes to shove--and even then they might just contract an
        | outside engineer for it.
       | 
       | Not to say that learning how things work on the ground level
        | isn't immeasurably valuable, but I think trying to do that first
       | is the slow approach. Learning is accelerated when 1) you enter
       | the industry (so you should prioritize output up-front and let
       | the deeper learning happen when you're earning a paycheck for it)
       | and 2) you get a better conceptual understanding of what's
       | happening under the hood offered by tools of abstraction like a
       | game engine or visual programming paradigm.
       | 
        | This comes from someone who spent many years trying to learn C++
        | and OpenGL the hard way, only to endure a long battle against an
        | internal sunk-cost fallacy that kept me from taking the no-code
        | approach. Don't waste your time taking this path if it doesn't
        | help you make what you actually want to be making at the end of
        | the day.
        
         | captainkrtek wrote:
         | This is great advice. I think it's a common trap when I see
         | questions like "what language should I learn/what language is
         | the best", which skip the point of "what would you like to
         | build". The tools change with time, and the best engineers in
         | my experience generally know how to use a variety of tools with
         | varying degrees of skill rather than be super deep in a single
         | one.
        
         | ryandrake wrote:
         | Listen to this guy, great advice. Early in my career, I set out
         | to become an "OpenGL expert" and I'd say I mostly got there. I
         | mean I'm no Mark Kilgard and haven't written any textbooks, but
         | I dove super deep into the technology, and gained at least a
         | decade of experience working on all levels of the API from the
         | driver level to conformance tests and performance tuning, up
         | the stack to game and application code, and across each major
         | desktop and mobile platform.
         | 
         | Where did it get me? Not very far, really. First of all, almost
         | nobody cares about OpenGL anymore--it's kind of dead with the
         | two major OS vendors finally abandoning it. Go to any "HN Who's
         | Hiring" and text search for OpenGL. Sure, I could have gone and
         | re-skilled and learned another similar graphics API, but the
         | second problem is nobody really needs people who write low-
         | level Direct3D or Vulkan or Metal anymore because that's all
         | abstracted for you by engines. And there are max 5 or 6
         | companies in the world that even have the need for people who
         | can do low-level graphics drivers. It's a career-limiting
         | niche.
         | 
         | The smaller the piece of the machine you focus on, the more of
         | a world-class expert you need to become in order to make it
         | your whole career. So, unless your plan includes becoming the
         | next John Carmack or something, I'd recommend going broad
         | rather than deep.
        
           | raincole wrote:
            | But without people like you (who understand low-level
            | graphics programming), the development of engines will
            | completely stagnate.
        
           | vlovich123 wrote:
           | I feel like better expertise targets tend to be more durable.
           | In other words, rather than expert in a specific technology
           | or technique, the best experts had the ability to develop
           | expertise in any given technology within a space and often
           | overlapped cursory knowledge with other spaces. I met plenty
           | of graphics experts at Oculus and they didn't care so much
           | about whether it was D3D or Vulkan - originally they were D3D
           | engineers for PCVR and then a good chunk of them shifted to
           | Vulkan once the focus shifted to mobile VR. They just knew
           | how those APIs mapped to the actual HW reality, how things
           | connected, why things were slow, how to improve performance,
           | etc. The mundane stuff of "what is the sequence of steps to
           | do X in Vulkan" is answered by Google/StackOverflow (or even
           | these days ChatGPT). Heck, a good chunk of them were creating
            | their own new APIs. This isn't unique to Meta, by the way.
            | 
            | It's like engineers who say they're "C" or "C++ experts".
           | With the exception of authors like Scott Meyers or people
           | working on the C++ language spec who I think can truly maybe
           | claim that title, the kind of thing that is called a
           | "language X expert" is the kind of expertise that a good
           | engineer should be able to develop in any language with 2-3
           | years of practice and proficient mastery within ~3-12 months
           | because the true expertise is the underlying CS principles
           | (at least for a family of languages - I've never done too
           | much with non-Algol families so I don't know how I'd fare
           | there).
           | 
           | However, I do agree that generally graphics engineer is a
           | niche limited to the few people working on gaming engines, VR
           | R&D, or animation R&D. But those skills, at least today, are
           | generally transferable to AI engineering because GPU compute
           | plays such a huge role. There's less graphics programming of
           | course and the APIs for GPU compute are a bit different, but
           | AFAIK many of the hardware concepts remain (e.g. wavefronts,
           | how GPUs do threading, etc etc).
        
             | pjmlp wrote:
              | When people like Bjarne Stroustrup, Herb Sutter, and
              | Andrei Alexandrescu say they are by no means C++ experts,
              | always beware of anyone who says otherwise.
              | 
              | The same applies to most languages, unless they are
              | talking about toy languages.
              | 
              | Even something like C or Go has so much room to debunk
              | such experts: between compilers, versions, language
              | evolution, runtime, standard library, OS-specific
              | behaviours, ....
        
               | loup-vaillant wrote:
                | > _When people like Bjarne Stroustrup, Herb Sutter, and
                | Andrei Alexandrescu say they are by no means C++
                | experts,_
               | 
               | Then you know something is deeply wrong with C++. If even
               | _they_ aren't experts, that just means the language,
               | despite being a human made artifact meant to encode human
               | thought, is beyond human comprehension.
               | 
               | That's kind of a problem, isn't it?
        
               | pjmlp wrote:
                | While it is kind of fun bashing C++, I also noted _"Even
                | something like C or Go has so much room to debunk such
                | experts: between compilers, versions, language
                | evolution, runtime, standard library, OS-specific
                | behaviours, ...."_.
                | 
                | For anyone who thinks otherwise, we can arrange a pub
                | quiz in Germany: I get the questions, and the audience
                | has to give up any kind of device with an Internet
                | connection.
        
               | loup-vaillant wrote:
               | Many times when someone says "X-lang", they actually mean
               | the entire ecosystem, practices, third party libraries...
               | And for anything popular enough it is indeed impossible
               | to be an expert in all of that. With C++ that's still the
               | case even if "C++" only means the language itself.
               | 
               | I'll concede that C with the insane breadth and
               | (over)reach of UB, is closer to C++ than I would like.
               | And that's a problem too.
               | 
               | I don't know Go well enough to judge.
        
               | pjmlp wrote:
                | We can start with level 1 questions: "name the version
                | in which this feature came to be".
        
               | loup-vaillant wrote:
               | Okay, you could... and if you're honest enough to accept
               | answers like "it's over 20 years old" I guess any expert
               | could answer that. My main point remains though: if even
                | Stroustrup or Sutter can't answer 95%+ of questions of this
               | kind, it would show beyond a doubt that C++'s complexity
               | got completely out of hand.
               | 
               | Even Stroustrup's humble bragging about being a "7" at
               | C++ looks real bad. If I'm not a 10 at a language I
               | created and maintained my whole life, I've birthed a
               | monster.
        
               | pjmlp wrote:
                | My example question was actually in regard to Go, made
               | public in 2009, with 14 years of history.
               | 
               | The bonus round of the same question would be, "name one
               | feature that was removed before 1.0".
               | 
               | We could make this question even more fun, if taking into
               | account gccgo specific extensions, or runtime changes as
               | well.
               | 
                | To put it bluntly, if someone shows up calling themselves
                | an expert, I expect World Cup-level skills in the
                | language they claim expertise in, across all levels.
        
               | The_Colonel wrote:
               | I get the overall point, but this humility makes it
               | difficult to use some kind of grading/levels.
               | 
               | Take e.g. MS Office as an example of a large C++ project.
               | Certainly, there are developers with just "average"
               | knowledge of C++ working there. Then there are people
               | having strong/advanced C++ knowledge.
               | 
                | But in a project like MS Office there are certainly devs
                | who are still much stronger in the language--the best in
                | the project/company, but still likely below people like
                | Stroustrup or Alexandrescu. What should we call them? I
                | think avoiding the term "expert" just to be consistent
                | with the above-mentioned humility is impractical.
        
               | pjmlp wrote:
                | Humility about how much one actually knows is a common
                | trait in senior developers.
                | 
                | So when someone sells themselves as an uber expert, that
                | is already a kind of red flag.
        
           | matheusmoreira wrote:
           | That's pretty sad. Without people like you, the engines all
           | the others use would not exist. It always saddens me to see
           | people making millions off of high level tools while the
           | people who made it possible get limited returns.
        
           | flohofwoe wrote:
            | Focusing on a single technology never was a great idea
            | though. Even towards the end of the '90s, D3D was already the
            | better choice on Windows, so one had to write code against
            | multiple 3D APIs anyway. This also gives a better perspective
            | on where the strengths and weaknesses of the different
            | technologies are, and it makes it easier to learn new APIs.
            | But in the end, 3D APIs are just a utility to write
            | applications (mostly games), not to build one's career upon.
            | 3D APIs come and go and are (to some degree) also subject to
            | fashion cycles; the underlying GPU hardware develops a lot
            | more predictably than 3D APIs (e.g. especially Vulkan had
            | pretty bad "mood swings" recently).
           | 
           | Of course when focusing strictly on money and "career
           | growth", going into game development is a pretty bad idea to
           | begin with ;)
        
           | p0nce wrote:
            | I'll join the choir and say to not ever try to become an
            | OpenGL expert. It's deep and kind of useless. First of all,
            | if your application uses OpenGL, then either:
            | 
            | - you have unlimited resources for testing and vetting
            | drivers, and you're working on huge CAD software that can
            | tell users what to use; or
            | 
            | - you don't have huge resources, and then your only hope is
            | to use an abstracted API like Skia, GDI, bgfx, IGL, WebGPU,
            | anything other than using OpenGL directly.
            | 
            | It will just never work everywhere if you use OpenGL
            | directly, and you won't be able to debug the driver or find
            | all the OS x driver x GPU combinations you need to debug it.
            | 
            | Your OpenGL-specific skills (avoiding driver bugs) will have
            | very little value, but 3D-specific skills could. It's a
            | waste of time that is only rivalled by learning C++. I would
            | concentrate on abstracted APIs that avoid those driver bugs,
            | or even learn a game engine to stay top-down.
        
             | pjmlp wrote:
              | This is something that those arguing for OpenGL as a kind
              | of universal 3D API never get: the number of code paths can
              | scale to the point that it is like using multiple 3D APIs
              | that all just happen to be called OpenGL.
        
           | 4death4 wrote:
           | Going broad rather than deep cuts both ways. Sure, you have
           | more employment opportunities, but the value of your
           | individual contributions has a ceiling. That means after a
            | relatively short period of time, you plateau career-wise.
           | Like is a generalist with 15 years of experience really that
           | much more valuable than one with 7 years? Not really. So if
           | you want to progress beyond a generic senior engineer, then
           | you need to specialize in something.
        
             | usrusr wrote:
             | But how many of those deep specialists got there by
             | rationally weighing pros and cons and then picking a
             | strategy?
             | 
             | My guess would be that for every example who got there on
             | the curriculum drawing board, there are at least two who
             | just happened to be in the right place at the time the
             | technology grew, four who got infatuated with "their"
             | technology so much they'd specialize no matter the pay
             | relative to generalists, and eight would-be generalists who
             | failed to keep their generalization in balance through a
             | sequence of projects built on top of the experience of
             | those before.
             | 
             | GP mentioned Carmack, that outlier of outliers. He did not
             | get there by picking one technology and burying himself
             | deep, he did whatever was practical. At the time IrisGL
             | begat OpenGL, Carmack was battling the limitations of EGA
             | and CGA in the Commander Keen series and then moved on to
             | create the 2.5D wonders that followed. But he was more than
             | ready to retire his world class expertise in software
             | rendering when OpenGL came into reach of PC hardware.
             | Textbook generalist behavior.
        
           | david-gpu wrote:
           | _> Sure, I could have gone and re-skilled and learned another
           | similar graphics API, but the second problem is nobody really
           | needs people who write low-level Direct3D or Vulkan or Metal
           | anymore because that 's all abstracted for you by engines.
           | And there are max 5 or 6 companies in the world that even
           | have the need for people who can do low-level graphics
           | drivers. It's a career-limiting niche._
           | 
           | I did that for a living and can only agree partially.
           | 
            | First, the part where we agree: only a handful of companies
           | hire graphics driver developers. This means that if this is
           | your career you need to be willing to either put up with the
           | idiosyncrasies of your employer or be willing to move
           | geographically. As a result, people tend to stick to the same
           | employer for many years.
           | 
           | As for OpenGL becoming obsolete, it's like anything else in
           | tech: you need to keep up with whatever is in demand.
           | Vulkan/Metal/DX12 didn't appear out of thin air, they were
           | created by the exact same folks who worked on older APIs for
           | many years, so it really wasn't a huge paradigm shift for
           | driver developers.
           | 
           | GPU driver development is a perfectly valid career choice
           | with good job stability and very decent pay.
           | 
            | What I disliked about it is that, perhaps contrary to what
            | you are saying, I felt that it was rather repetitive after
            | having worked on a few different GPU generations.
           | Innovation happens in other areas like GPU architecture, not
           | in driver development, but that's a topic for another day.
        
           | pandaman wrote:
           | >It's a career-limiting niche.
           | 
            | This depends on your career aspirations. There are not as
            | many companies hiring graphics programmers as there are shops
            | who need "front end" or whatever they call scripting web
            | pages nowadays, but the barrier to entry is rather high, so
            | there are plenty of jobs in every FAANG plus Microsoft,
            | Tesla, self-driving shops (I have even been approached by
            | self-flying robot startups a couple of times), training (from
            | military to oil rigs), and of course every game studio (which
            | may or may not pay little and force you to work overtime; I
            | am pretty sure a graphics programmer at, say, Roblox has
            | better compensation and working conditions than a front-end
            | developer at Amazon, for example)--and yes, the 5 or 6
            | companies that need drivers (Apple, NVidia, AMD, Qualcomm,
            | Samsung, ARM, Intel, etc.).
        
         | TheRoque wrote:
         | Well, as someone looking to get into this field, I kind of
         | disagree. A lot (if not all) job postings about graphics
         | programming require you to know C++ beforehand. Sure, you can
          | have another role, like gameplay programmer, and slowly work
          | your way into graphics, and it's probably easier to do so,
          | but in the end, for the graphics programmer role, C++ is
          | required.
         | 
         | What you are describing is more about someone who wants to be
         | productive with graphical stuff fast, but it's not graphics
         | programming.
        
           | pjmlp wrote:
           | First learn the foundations, then the language.
           | 
           | When I started, graphics programming was all about Assembly.
           | 
           | Then it was about Object Pascal and C, then it was about C++,
           | now it also requires C#.
           | 
           | And who knows, maybe in 20 years, one of the C++ wannabe
           | replacements manages to also have a spot, or some AI driven
           | thingie.
           | 
            | Those with solid foundations in graphics programming
            | algorithms will manage regardless of the language.
        
             | dahart wrote:
             | > now it also requires C#.
             | 
             | For Unity games? What else? Just curious. I've been doing
             | graphics programming for decades, and C# has never been
             | part of it, and still isn't on my radar.
        
               | pjmlp wrote:
               | Yes, plenty of Unity shops, and most of the companies
               | doing VR/AR are using Unity.
               | 
               | Not only Unity, Capcom has their own C# toolchain.
               | 
               | One example of a Capcom game that makes use of it is
               | Devil May Cry for PlayStation 5.
               | 
                | Unreal's build system is based on C#, although it plays a
                | minor role there, and this one is naturally debatable.
        
           | meheleventyone wrote:
            | There are two or three strands of graphics programming,
            | though:
           | 
           | Plumbing - Delivering data efficiently from the game engine
           | to the GPU often in a platform agnostic way with efficient
           | implementations underneath.
           | 
           | Art Pipeline - Delivering data efficiently from the artist
           | tools to the game engine.
           | 
           | GPU Programming - Creating visual effects, shaders, compute
           | shaders and tools around these to empower artists.
           | 
           | All of these use multiple languages, sure C++ is a common one
           | and good to know (likewise as a gameplay programmer) but the
           | bigger percentage of what you need to know as a graphics
           | programmer isn't how to write C++ but the concepts you're
           | trying to implement with it.
           | 
           | There's also R&D but it's a much smaller part of things.
        
           | flohofwoe wrote:
           | People shouldn't be disillusioned though when they find out
           | that there's not such a big need for the traditional role of
           | graphics programmer who wrestles directly with lighting
           | models in shader code and shadow rendering implementations.
            | 95% (or so) of game industry jobs are plumbing engine-
            | provided components together, writing some very high-level
            | gameplay code, and maybe a bit of in-house tools
            | development. The hard rendering tasks are done by a handful
            | of engines now, and currently the pendulum seems to be
            | swinging away from in-house engines towards UE5 again.
        
           | dahart wrote:
           | > but it's not graphics programming.
           | 
           | As someone who's been in the field for a long time, I kind of
           | disagree. Game engine jobs don't define the boundaries of
           | what "graphics programming" means. That is a very narrow and
           | specific kind of graphics programming, and there's lots more
           | graphics programming than just games. I'd probably even
           | recommend people wanting to do games graphics programming to
           | start outside of games, because the engine consolidation with
           | Unity and Unreal has brought with it an overall reduction in
           | the number of people doing graphics programming for games.
           | There's indie games and a few studios that still do their own
           | engines, but there are a bunch of other industries that hire
           | graphics, for film and effects, for 3d tools, for
           | visualization, for mobile/web/desktop applications, for
           | science & research, for VR/AR, for industrial applications,
           | etc., etc.
           | 
           | Being fluent in C++ can only help you, so do work on that.
           | Games engines need more and more people who can do ray
           | tracing and neural nets, and some of the old guard of raster
           | API experts didn't learn as much about those things, so that
           | is one angle of attack for getting in. Another is to be very
           | fluent in the math of graphics, and you can learn and
           | practice that in any language, and find non-games jobs that
           | will pay for it. Math experts are generally more valuable and
           | hard to find than API experts.
           | 
           | FWIW, my path was graphics in school, undergrad + MS, then
           | scientific visualization for a university, then C programming
           | for a CG movie studio, then I was hired to do games engine
           | graphics programming (precisely what you're talking about)
           | and I moved into a tools generalist and gameplay lead role
           | instead (which was more fun than wrangling shader
           | compilation), then I went to a web app company and did 2d
           | graphics while learning JavaScript & WebGL, then started my
           | own web app company using javascript & WebGL, and then
           | switched to doing CUDA and ray tracing with OptiX. Along the
           | way, lots of fun graphics projects in Python. All of this
           | involved real "graphics programming", and almost all of it is
           | outside the boundary line you drew. I say this not for
           | argument's sake, but to ideally give you more hope, and to
           | help you & others see there's a wider range of options to get
           | into game graphics than 'learn C++ & DX'.
        
         | nullptr_deref wrote:
          | This is hands down the reality and the best advice out there.
          | No one cares if you have been able to do Vulkan for 8 years
          | EXCEPT for research labs.
          | 
          | And getting there requires that you either have a PhD or
          | exactly the experience (three.js, Unity, etc.) described
          | above, because that will help you set foot in the industry,
          | effectively allowing you to work at a higher abstraction and
          | slowly/rapidly descend into low-level code.
        
         | beAbU wrote:
          | This is the classic trap that new developers step into all the
          | time. It's not just associated with graphics programming.
          | 
          | YouTube is full of beginner programming videos that take the
          | developer through a journey of learning a stack of technologies
          | instead of focusing on something interesting to build. You end
          | up with a lot of cargo-culting, and these massively complex
          | Rube Goldberg contraptions to render some text on a web page.
          | All in the name of padding out that CV.
         | 
         | When I was in university I dabbled with some graphics
         | programming for a semester, and I found it was sufficiently
         | complex that the only reasonable answer to "what do I want to
         | make" was "render a green triangle on a black background".
         | Going from there to square, to cube, to sphere, to animated
         | sphere, to bouncing ball, is a logical progression and it helps
         | you stay focussed on the prize. So don't also make the mistake
         | of answering the above question with "first person shooter with
         | ray-traced lighting and subsurface scattering".
         | 
         | I can promise you the first iteration of a bouncing ball will
         | be truly horrible code. But that's fine. Over time you'll
         | figure out how to optimise and improve things. And let me tell
         | you, there's nothing as invigorating as discovering a design
         | pattern by yourself: reading a book on a new topic and going
         | "hey I'm already doing that!"
        
           | dgb23 wrote:
           | Tangent:
           | 
           | And if you then tell about the pattern/technique/architecture
           | etc. to someone who's been around for a while:
           | 
            | "Well, we already did that in the '80s, it was called
            | Foobar. We just had to do it barefoot in the snow with our
            | hands tied behind our backs."
        
         | DanielHB wrote:
          | Much like with sorting algorithms, I still think there is some
          | value in teaching this kind of low-level programming in
          | college. You gain a lot of theoretical knowledge as well as a
          | lot of heavy, complex algorithm training, even if you never end
          | up doing that kind of work in a real job.
          | 
          | With graphics it also gives you a lot of applied _math_
          | experience; there are a TON of fields that are desperate for
          | people who can do math. Just at my last job (software for CNC
          | machines) we needed people who could program the math necessary
          | to get a drill to cut a certain shape into a metal block, and
          | it was really hard to find these people. Yet, for example,
          | cloud devops engineers, although expensive, were readily
          | available.
        
       | Buttons840 wrote:
       | I'd recommend the Pikuma course Graphics From Scratch[0]. The
        | first thing you do is write a set_pixel function utilizing SDL,
        | and the rest of the course is all your own code: every matrix
       | operation, every vertex transformation, every triangle
       | rasterization. You calculate what every individual pixel should
       | be colored.
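        | 
        | To give a flavor of the starting point (my own minimal sketch,
        | not the course's code): a framebuffer you color one pixel at a
        | time, pushed to the screen through SDL2.
        | 
        |     #include <SDL.h>
        |     #include <cstdint>
        | 
        |     static uint32_t fb[480][640];        // ARGB framebuffer
        | 
        |     void set_pixel(int x, int y, uint32_t c) {
        |         if (x >= 0 && x < 640 && y >= 0 && y < 480)
        |             fb[y][x] = c;
        |     }
        | 
        |     int main(int, char**) {
        |         SDL_Init(SDL_INIT_VIDEO);
        |         SDL_Window* win = SDL_CreateWindow("set_pixel",
        |             SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        |             640, 480, 0);
        |         SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
        |         SDL_Texture* tex = SDL_CreateTexture(ren,
        |             SDL_PIXELFORMAT_ARGB8888,
        |             SDL_TEXTUREACCESS_STREAMING, 640, 480);
        |         for (bool run = true; run; ) {
        |             SDL_Event e;
        |             while (SDL_PollEvent(&e))
        |                 if (e.type == SDL_QUIT) run = false;
        |             for (int x = 0; x < 640; ++x)
        |                 set_pixel(x, 240, 0xFF00FF00);  // green line
        |             SDL_UpdateTexture(tex, nullptr, fb,
        |                               640 * sizeof(uint32_t));
        |             SDL_RenderCopy(ren, tex, nullptr, nullptr);
        |             SDL_RenderPresent(ren);
        |         }
        |         SDL_Quit();
        |         return 0;
        |     }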
       | 
       | [0]: https://pikuma.com/courses/learn-3d-computer-graphics-
       | progra...
        
         | charcircuit wrote:
         | >No GPU, no OpenGL, no DirectX!
         | 
         | This is the opposite of what you would hope to see for learning
         | graphics programming.
        
           | CountHackulus wrote:
           | I disagree. It's good to understand what the GPU is doing at
           | scale before jumping in too deep. It's obviously not how you
           | do your day-to-day work, but it helps you understand what's
           | going on at a deeper level.
        
           | robomartin wrote:
           | Believe it or not, we were doing graphics without any of
           | those things a very long time ago. Learning fundamentals is
           | really important. You can always learn easier ways to do
           | things later.
           | 
           | For example, area fill algorithms are really interesting.
           | Etc.
        
           | ChuckMcM wrote:
           | I feel like this comment perfectly captures the difference
           | between programming and coding.
           | 
           | Programming comes from a place of first principles, the goal
           | being to understand what is needed completely so that a
           | solution that meets many parallel constraints can be
           | constructed.
           | 
           | Coding comes from a place of completing a task, the goal
           | being to get from the requirement to the operating task in as
           | short a time as possible so that one might move on to the
           | next task.
           | 
           | Both disciplines have value. The original question was
           | unclear about where the author hoped to end up.
           | 
           | To put this in a slightly different perspective, a graphics
           | _programmer_ can write a program to show a shaded object on
           | any platform with a CPU and a way to display graphics. A
           | graphics _coder_ can write a program to show a shaded object
           | only on those platforms where they have previously mastered
           | the APIs for generating display graphics.
        
             | NikolaNovak wrote:
             | I like the distinction, and I think there should exist
             | different terms, but I don't think those two are nearly
             | universal. Many people will use them interchangeably,
              | others may even think the other way around ("programming" as a
             | professional discipline where you use tools to achieve an
             | MVP most efficiently, vs "coding" as an enthusiastic
             | activity that's more creative and open ended).
        
             | charcircuit wrote:
             | Your definitions are made up.
             | 
             | Programming refers to the act of writing a program for a
             | computer to follow.
             | 
             | Coding refers to the act of writing code for a computer.
             | 
              | Edit: The definitions are not commonly used this way, and
              | in fact I've heard other people even give opposite
              | definitions of which meaning corresponds to which word.
        
               | javajosh wrote:
                | Hate to break it to you, but all definitions are made up.
                | The question is whether the distinction is useful. In my
                | opinion, it seems useful to distinguish between
                | understanding-oriented tasks and goal-oriented tasks.
        
             | dragontamer wrote:
             | 2D graphics with bitblits is a completely different
             | paradigm. If you were using 2D GPUs from the 90s, or maybe
             | a 2D Industrial PC's GPU, sure... Learning about Bitblits
             | and rects is good.
             | 
             | But if your goal is to program a modern shader on a modern
             | platform (even if it's a 2D graphic), you should learn a
             | modern graphics library.
             | 
             | -------
             | 
              | A modern graphics API is laid out the way it is to
              | maximize performance on modern systems. A first-
              | principles, bottom-up approach will absolutely cover
              | shaders (maybe compute shaders are easiest?).
        
               | crq-yml wrote:
               | There's a lot of leeway to learn 3D principles through a
               | basic software rasterizer. It does not take long - if you
               | already have some awareness of the math, it's at most a
               | few weeks to work through the essentials from a tutorial.
               | Once you get to the point where you're drawing filled
               | triangles through a projected camera, you can move on.
               | There's no need to go into occlusion culling strategies,
               | texture mapping or lighting calculations, or really to
               | make the rasterizer fast in any way. That wouldn't be the
               | point of the exercise. It could be done entirely with a
               | fixed-size bitmap and a setPixel(x, y, v) call.
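                | 
                | For a sense of scale, the "filled triangle" milestone is
                | small enough to quote in full. This is my own
                | illustration (assuming the setPixel(x, y, v) call
                | mentioned above), using the standard edge-function
                | inside test:
                | 
                |     #include <algorithm>
                |     #include <cstdint>
                | 
                |     struct Vec2 { float x, y; };
                | 
                |     // Signed area: which side of edge a->b is p on?
                |     float edge(Vec2 a, Vec2 b, Vec2 p) {
                |         return (b.x - a.x) * (p.y - a.y)
                |              - (b.y - a.y) * (p.x - a.x);
                |     }
                | 
                |     void fillTriangle(Vec2 a, Vec2 b, Vec2 c, uint32_t v,
                |             void (*setPixel)(int, int, uint32_t)) {
                |         // Bounding box, then an inside test per pixel:
                |         // slow, but that isn't the point here.
                |         int x0 = (int)std::min({a.x, b.x, c.x});
                |         int x1 = (int)std::max({a.x, b.x, c.x});
                |         int y0 = (int)std::min({a.y, b.y, c.y});
                |         int y1 = (int)std::max({a.y, b.y, c.y});
                |         for (int y = y0; y <= y1; ++y)
                |             for (int x = x0; x <= x1; ++x) {
                |                 Vec2 p{x + 0.5f, y + 0.5f};
                |                 float w0 = edge(b, c, p);
                |                 float w1 = edge(c, a, p);
                |                 float w2 = edge(a, b, p);
                |                 // Same sign for all three = inside.
                |                 if ((w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                |                     (w0 <= 0 && w1 <= 0 && w2 <= 0))
                |                     setPixel(x, y, v);
                |             }
                |     }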
               | 
               | The argument against "just" using the hardware is that
               | the hardware resists learning that conceptual skeleton.
               | Instead of learning how a rasterizer is implemented, you
               | learn the specific API incantation to bring one up, and
               | that has changed a lot over the years in the direction of
               | being a more professionalized phenomenon, so now it's
               | much easier to start application top-down, from within a
               | premade rendering environment like a game engine or
               | Blender's rasterizers.
               | 
               | Learning to work on production rendering engines would
               | involve reading and studying existing implementations,
               | reading the relevant research papers along the way.
        
               | dragontamer wrote:
               | > you learn the specific API incantation to bring one up
               | 
               | The graphics pipeline from geometry -> vertex shader ->
               | pixel shader -> image is the damn point.
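                | 
                | Written as plain functions (a toy restatement of mine,
                | not any real API), that flow is roughly:
                | 
                |     #include <cstdint>
                | 
                |     struct Vec3 { float x, y, z; };
                | 
                |     // "Vertex shader": project a camera-space point
                |     // onto a 640x480 screen (90-degree FOV, z > 0).
                |     Vec3 vertexShader(Vec3 v) {
                |         return { 320 + 320 * v.x / v.z,
                |                  240 - 240 * v.y / v.z, v.z };
                |     }
                | 
                |     // "Pixel shader": pick a color for one covered
                |     // pixel.
                |     uint32_t pixelShader(int x, int y) {
                |         return 0xFF00FF00;   // flat green
                |     }
                | 
                |     // In between, fixed-function hardware rasterizes:
                |     // it finds which pixels each projected triangle
                |     // covers and runs pixelShader on each one.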
               | 
               | When the literal hardware is laid out in a certain way,
               | you must learn that hardware layout and understand it.
               | These API 'incantations' as you put it aren't high magic.
               | They logically flow from the requirements of modern
               | graphics programming.
               | 
               | If you just skip that stuff, you won't ever learn about
               | modern GPUs, modern CPU->GPU data transfers, GPU
               | parallelism or how a CPU calls the GPU routines to
               | render.
               | 
               | Maybe you can make the argument that you should learn
               | rasterizing first to simplify the learning process. But I
               | would argue that it's easy enough to learn rasterizing
               | when you get to the Pixel Shader step.
               | 
               | > The argument against "just" using the hardware is that
               | the hardware resists learning that conceptual skeleton.
               | Instead of learning how a rasterizer is implemented, you
               | learn the specific API incantation to bring one up, and
               | that has changed a lot over the years in the direction of
               | being a more professionalized phenomenon, so now it's
               | much easier to start application top-down, from within a
               | premade rendering environment like a game engine or
               | Blender's rasterizers.
               | 
               | The argument against that is that you eventually have to
               | learn today's hardware anyway. So you might as well start
               | now.
               | 
               | Tomorrow's hardware is based on today's hardware. And
               | today's hardware is based on yesterday's hardware.
               | 
               | It's all incremental progress. I'd personally say that
               | OpenGL with GLSL might kinda sorta look like modern stuff
               | (vertex and pixel shaders), but anything older (ex: 90s
                | BitBlits) is so old it's just a complete waste of time.
        
             | lackbeard wrote:
             | Good comment, and, basically, I fully agree, except, I
             | really dislike your attempt to appropriate the words
             | "programming" and "coding" here. Like, can you just explain
             | what you mean without trying to redefine terms that have
             | broadly accepted definitions distinct from how you're
             | trying to use them here?
             | 
             | (Sorry, this probably sounds more critical than I'm
             | intending...)
        
               | ChuckMcM wrote:
               | No worries, got to use something as the holder of the
               | definition. FWIW I read a similar essay that discussed
               | cooks and chefs and came away seeing the many parallels
               | with programming and coding.
        
               | caslon wrote:
               | Programming requires creative thinking. Coding,
               | historically, was a lower-paid, unskilled position.
               | 
               | You may think this is redefinition. It's not. This is how
               | both terms _originated._ A  "coder" did the unskilled
               | gruntwork of implementation for business projects, while
               | a programmer was holistic.
               | 
               | > I find it bizarre that people now use the term "coding"
               | to mean programming. For decades, we used the word
               | "coding" for the work of low-level staff in a business
               | programming team. The designer would write a detailed
               | flow chart, then the "coders" would write code to
               | implement the flow chart. This is quite different from
               | what we did and do in the hacker community -- with us,
               | one person designs the program and writes its code as a
               | single activity. When I developed GNU programs, that was
               | programming, but it was definitely not coding.
               | 
               | > Since I don't think the recent fad for "coding" is an
               | improvement, I have decided not to adopt it. I don't use
               | the term "coding", unless I am talking about a business
               | programming team which has coders.
               | 
               | https://stallman.org/stallman-computing.html
               | 
               | In this case, it is you, and the wider cottage industry
               | of business "coders" who are doing the appropriation.
               | It's not your fault. The bootcamp or Super Cool Totally
               | Serious College you likely learned from probably used the
               | term "coder" alongside words like "rockstar!" You were
               | given a bad definition, and never knew any better at all.
               | ChuckMcM's comment, on the other hand, is correct. Using
               | the word "literally" to mean "figuratively," while
               | colloquial, is still less correct than just saying
               | "figuratively."
               | 
               | The phrase "code monkey" is not a compliment, and didn't
               | come from nowhere. It came directly from these pre-
               | existing definitions.
               | 
               | Programming requires logic. If you are coding, the
               | thinking's already been done for you. You are just doing
               | unskilled labor akin to data entry to get the computer to
               | actually follow the instructions.
        
             | dkjaudyeqooe wrote:
             | There is a lot to be said for understanding things from
             | first principles, but not everyone wants to go that low.
             | Not everyone is that nerdy or maybe hasn't the time or
             | motivation.
             | 
             | The other issue is that not everyone learns well from the
             | bottom up. I believe that top down is a much better way of
             | learning anything, and lets you progressively get closer to
             | first principles while not discouraging learners with a
             | steep difficulty curve. But unfortunately it is an approach
             | that is seldom facilitated by anyone.
        
           | moron4hire wrote:
           | It's what my graphics courses in my computer science degree
           | did. The skills I learned have benefitted me long after (20
           | years), and I mostly "just" do web development.
        
           | delta_p_delta_x wrote:
           | Personally... I disagree. The course says:
           | 
           | > You'll learn how a software 3D engine works under the hood,
           | and ... write a complete _software rasterizer_ from scratch
           | 
           | Writing a software rasteriser is a _fantastic_ way to learn
           | the classic graphics pipeline. Every _single_ aspect of said
           | pipeline offers scope for one to learn more about how GPUs
           | work, and the algorithms behind them. This would be immensely
           | educational to a new graphics developer.
           | 
           | Vertex processing, including fast, efficient file parsing,
           | vertex data layout and storage, and optimisation.
           | 
           | Fast primitive assembly from vertices, including line-drawing
           | and interpolation algorithms.
           | 
           | Texture mapping and mipmapping.
           | 
           | The rasterisation phase itself offers tons of opportunity,
           | from Bresenham's line drawing algorithm to supersampling,
           | tiled rendering, parallelisation, and efficient memory
           | layouts for cache locality.
           | 
           | Clipping, culling, hidden-surface removal, _z_-buffering,
           | and stencil tests.
           | 
           | Various other algorithms that are taken for granted with a
           | pre-existing graphics pipeline, like texture lookup, vector
           | reflection, environment mapping.
           | 
           | Post-processing and miscellaneous algorithms like anisotropic
           | filtering, multi-sample anti-aliasing, temporal anti-
           | aliasing, and even deferred rendering.
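           | 
           | To make the rasterisation step concrete, here is a minimal,
           | unoptimised sketch of the half-space (edge-function)
           | coverage test in C++. The hard-coded triangle and the
           | ASCII-PGM output are purely illustrative; a real rasteriser
           | adds fill rules, sub-pixel precision, and interpolation:
           | 
           |     #include <cstdio>
           | 
           |     // Cross product of (b-a) with (c-a); its sign says
           |     // on which side of edge a->b the point (cx, cy) lies.
           |     float edge(float ax, float ay, float bx, float by,
           |                float cx, float cy) {
           |       return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
           |     }
           | 
           |     int main() {
           |       const int W = 64, H = 64;
           |       // One counter-clockwise triangle in screen space.
           |       float x0 = 8, y0 = 8, x1 = 56, y1 = 16;
           |       float x2 = 24, y2 = 56;
           |       std::printf("P2\n%d %d\n1\n", W, H);  // PGM header
           |       for (int y = 0; y < H; ++y) {
           |         for (int x = 0; x < W; ++x) {
           |           float px = x + 0.5f, py = y + 0.5f;
           |           // Inside when on the same side of all edges.
           |           int in = edge(x0, y0, x1, y1, px, py) >= 0 &&
           |                    edge(x1, y1, x2, y2, px, py) >= 0 &&
           |                    edge(x2, y2, x0, y0, px, py) >= 0;
           |           std::printf("%d ", in);
           |         }
           |         std::printf("\n");
           |       }
           |     }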
        
             | charcircuit wrote:
             | If you want to learn webdev, you don't start by building a
             | web browser, despite the fact that building a web browser
             | would teach you a lot.
        
               | delta_p_delta_x wrote:
               | Web dev is sufficiently abstracted from the browser and
               | the hardware that a skilled web developer doesn't really
               | need to know the internals of the V8 engine or how a
               | browser works. A web back-end developer probably doesn't
               | need to deal with the browser at all.
               | 
               | Graphics programming skill (by 'skill', I mean being able
               | to write a shader pipeline that both satisfies the art
               | direction _and_ performs well), on the contrary, is _very
               | deeply tied_ to a good understanding of hardware.
               | Especially now that the new APIs (Vulkan, Metal, D3D12)
               | are _purposely_ less abstracted than their predecessors.
        
               | charcircuit wrote:
               | All programmers at a certain level must understand the
               | lower levels of the stack they are building upon. I think
               | most people can get by with just learning the
               | abstractions that are provided to them. With time people
               | can learn more and more about lower levels as they
               | specialize. Most people who become graphics programmers
               | don't need to know the low levels of how a GPU works, so
               | I personally do not think that is a good place to start
               | for people who want to get into graphics programming.
        
               | achierius wrote:
               | Are you a graphics programmer, or are you just saying
               | this because that's true in the CPU world (where you have
               | experience)? From both my own and my colleagues'
               | experiences, I would 100% disagree: the nature of GPU
               | architectures means you are forced to deal with super
               | low-level details from the very beginning. Part of this
               | is because graphics (and GPGPU compute) programming is
               | inherently performance-constrained: if it wasn't, you
               | could just do your work on the CPU with much less of a
               | headache. Even just generally though, the path to running
               | code on a GPU is much simpler -- the hardware does less
               | work for the programmer, there's no operating system to
               | manage threads, everything's a single binary, etc. --
               | which means that there's less to insulate you from what's
               | underneath.
        
               | charcircuit wrote:
               | >Are you a graphics programmer, or are you just saying
               | this because that's true in the CPU world (where you have
               | experience)
               | 
               | Neither.
               | 
               | >the nature of GPU architectures means you are forced to
               | deal with super low-level details from the very
               | beginning.
               | 
               | Not everyone is trying to push the hardware to its limits
               | by making the best thing possible with the hardware.
               | Plenty of projects can get along fine with an A/AA
               | renderer or with unoptimized shaders.
               | 
               | >there's no operating system to manage threads
               | 
               | There literally is. Or, if you disagree, the firmware
               | manages the threads.
        
               | signaru wrote:
               | A more realistic comparison for webdev is using plain JS
               | vs using frameworks.
        
           | throwawee wrote:
           | Agreed. I've seen beginners fall into the trap of spending
           | way too much time on software rendering that won't be useful
           | or performant later because every platform they'll ever
           | develop for has hardware accelerated rendering. I learned how
           | to painstakingly render scenes pixel by pixel and then had to
           | unlearn it because graphics programming doesn't work that way
           | anymore.
           | 
           | Teaching beginners to render graphics without the GPU is like
           | teaching them to do fractional math without the FPU. It won't
           | make their first project better and gives them the wrong idea
           | of what to expect.
        
           | virtualritz wrote:
           | >> No GPU, no OpenGL, no DirectX!
           | 
           | > This is the opposite of what you would hope to see for
           | learning graphics programming.
           | 
           | It is exactly what you would hope to see.
           | 
           | And I would dare say I'm a graphics programmer, mostly
           | self-taught.
           | 
           | When I started, as a teenager, in the late '80s, there were
           | no GPUs. So I learned everything from first principles. I
           | recall implementing Bresenham in x86 assembly for my CGA
           | card. And then, as a follow up, rasterizing a triangle. You
           | really had to understand stuff end-to-end then as the
           | hardware was so slow. I.e. even C was too slow for that
           | stuff.
           | 
           | And today, still, the best offline renderers, the ones that
           | produce the images you see on the big screen, are CPU-only:
           | 100% custom code, no 3rd-party API dependency.[1]
           | 
           | If you write stuff for CAD/CAM/CAE/VFX, there is a big chance
           | you do not think about the constraints of a GPU, and even
           | less about one of the APIs used to program it, except for
           | previewing stuff.
           | 
           | I would suggest to anyone learning graphics programming (or
           | anything else for that matter) to do so from first
           | principles.
           | 
           | GPUs are specialized hardware for realtime applications. That
           | is very specific. I don't say don't learn that. But I suggest
           | to not start with it.
           | 
           | [1] One of my best friends is the lead developer of the
           | 3Delight renderer.
        
             | crq-yml wrote:
             | My little spot of nuance on this reply would be that the
             | point of starting in software would not be to aim for a
             | fast or feature-rich implementation. That seems to be the
             | sticking point of the replies in favor of going hardware-
             | first - that "we don't optimize the same way anymore". But
             | nobody was asking about optimizing! People seem to go
             | performance-brained when they talk about graphics because
             | they read about John Carmack once. Then they go right back
             | to their Javascript frameworks.
             | 
             | Like any student engineering project, "baby's first
             | rasterizer" would emphasize a combination of concepts and
             | motions - a path to take, to get to a result that can be
             | tested. We don't even have to use Bresenham now - deriving
             | the rasterized line from linear interpolation is
             | mathematically more sound and no sweat for modern CPUs. But
             | it might be pedagogically useful to compare the two to
             | explain quality vs performance tradeoffs, ones that were
             | made historically and those that are still in use today.
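             | 
             | As a sketch of that lerp-derived line - with a stand-in
             | put_pixel, since the framebuffer itself is beside the
             | point - it really is just this:
             | 
             |     #include <algorithm>
             |     #include <cmath>
             |     #include <cstdio>
             |     #include <cstdlib>
             | 
             |     // Stand-in framebuffer write; just logs the pixel.
             |     void put_pixel(int x, int y) {
             |       std::printf("%d,%d\n", x, y);
             |     }
             | 
             |     // Enough lerped samples that neighbours are at most
             |     // one pixel apart; no error terms at all.
             |     void line_lerp(int x0, int y0, int x1, int y1) {
             |       int n = std::max(std::abs(x1 - x0),
             |                        std::abs(y1 - y0));
             |       for (int i = 0; i <= n; ++i) {
             |         float t = n ? (float)i / n : 0.0f;
             |         put_pixel((int)std::lround(x0 + t * (x1 - x0)),
             |                   (int)std::lround(y0 + t * (y1 - y0)));
             |       }
             |     }
             | 
             |     int main() { line_lerp(0, 0, 7, 3); }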
        
         | starmole wrote:
         | I find this very interesting. I have been toying around with
         | how I would design a graphics 101 course myself. Should it
         | start with putpixel-style SDL code like we did in the 90s? Or
         | start with Shadertoy? Of course basic matrix math is always
         | important. But how to teach rasterizing a triangle? Can we skip
         | to homogeneous coordinates and quad trees without going through
         | scan lines?
         | Should we really teach Phong shading or can we move straight to
         | BRDFs? Some parts might be teaching "old hacks" instead of
         | relevant skills. Statistics and sampling are way more important
         | today. I believe graphics is getting more "mathy" every year.
         | So learn math, teach math.
        
           | corysama wrote:
           | I started with putting pixels in MCGA to CPU-rasterize
           | Phong-shaded triangles, and I don't recommend it.
           | 
           | Instead, I'd recommend
           | 
           | https://learnopengl.com/
           | https://raytracing.github.io/books/RayTracingInOneWeekend.ht...
           | https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...
           | https://foundationsofgameenginedev.com/
           | https://youtu.be/j-A0mwsJRmk
           | 
           | Though, if you really do want to put pixels, this is how you
           | should do it:
           | https://gist.github.com/CoryBloyd/6725bb78323bb1157ff8d4175d...
        
         | rmshin wrote:
         | Just wanted to second this recommendation. I did the course a
         | few months ago with near-zero baseline in graphics programming
         | (though a few years' experience as a standard swe), and it gave
         | me a pretty decent grasp of how 3d shapes get drawn on the
         | screen. Afterwards I was able to pick up WebGPU in a matter of
         | days, which I don't think would've been possible without the
         | understanding I gained from the course.
         | 
         | If anyone's looking for motivation, I made a wasm-compiled demo
         | of the renderer you produce by the end -
         | https://rmshin.github.io/3d-renderer-wasm
        
         | torginus wrote:
         | There are a couple of excellent resources out there for
         | implementing 3D rendering from scratch.
         | 
         | One that I cannot recommend enough is this GitHub repo:
         | 
         | https://github.com/ssloy/tinyrenderer/wiki/Lesson-0:-getting...
         | 
         | If you are more of a visual learner, this guy is also a
         | treasure trove:
         | 
         | https://www.youtube.com/watch?v=ih20l3pJoeU
        
           | ggambetta wrote:
           | Wow, that first link is fantastic, thanks for sharing!
        
         | ggambetta wrote:
         | My website/book/course does the same thing, and it's freeeeeee!
         | https://www.gabrielgambetta.com/computer-graphics-from-scrat...
        
       | adamnemecek wrote:
       | wgpu, the Rust WebGPU implementation, is the bee's knees.
       | https://wgpu.rs/ You can use it beyond the web.
        
         | CyberDildonics wrote:
         | You think the first thing for someone who wants to learn
         | graphics programming is to compile Rust's WebGPU
         | implementation?
        
           | jholman wrote:
           | Obviously not. Obviously the first thing for someone who
           | wants to learn graphics programming is to learn Rust. I'm
           | surprised you asked.
        
           | adamnemecek wrote:
           | It's actually relatively small.
        
             | CyberDildonics wrote:
             | So what? You don't think they might need to learn math or a
             | graphics API or fundamentals first? You think they need to
             | compile a rust webgpu implementation first because it's
             | small?
             | 
             | How does this make any sense?
        
               | TheRoque wrote:
               | You can learn along the way. If you learned with the
               | famous learnopengl website, you learned about compiling
               | glad and glfw and setting up a C++ project, along with
               | the maths, all at the same time, incrementally... The
               | tutorials for Rust's wgpu aren't any different.
        
               | adamnemecek wrote:
               | What do you think wgpu is if not a graphics API?
        
               | CyberDildonics wrote:
               | You can use it in a browser without compiling anything.
               | Have you ever taught someone from scratch? You don't
               | invent nonsense rabbit holes and barriers to entry to
               | make things harder.
        
           | TheRoque wrote:
           | WebGPU is a pretty good starting point, that's what I did
           | myself (with C++, not Rust though, which should be even more
           | straightforward). You can even use it in the browser and skip
           | all the native hassle.
           | 
           | Just learn the basic concepts like buffers, drawing, texture,
           | light, perspective etc. from https://learnopengl.com/ then
           | you can jump into WebGPU. Even though there aren't that many
           | WebGPU tutorials, applying the OpenGL tutorials to it is
           | pretty straightforward once you understand the fundamentals.
        
             | CyberDildonics wrote:
             | Using webgpu makes sense, but that isn't what they said.
             | They said compiling a rust webgpu implementation.
        
               | TheRoque wrote:
               | Since this is very straightforward to do, I took them as
               | equivalent statements.
        
               | CyberDildonics wrote:
               | You think writing javascript fragments in a browser that
               | is already installed is exactly the same as learning to
               | compile rust then compiling a third party webgpu project
               | in it?
               | 
               | That's not even graphics programming and everyone already
               | has something that works, how are they at all the same
               | thing?
        
       | atum47 wrote:
       | I'm on that journey myself. Two years ago I followed several
       | tutorials and youtube videos to create my first 3D engine. It's
       | very simple, but I like simple stuff. Right now I'm working on
       | using this engine to create a city builder game [1]. It is a lot
       | of fun to learn to manipulate stuff using matrices and cross
       | products.
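       | 
       | For anyone curious, the cross product that does so much of that
       | work is only a few lines (minimal Vec3, purely for
       | illustration):
       | 
       |     struct Vec3 { float x, y, z; };
       | 
       |     // Perpendicular to both a and b: feed it two triangle
       |     // edges and you get the face normal (unnormalized).
       |     Vec3 cross(Vec3 a, Vec3 b) {
       |       return { a.y * b.z - a.z * b.y,
       |                a.z * b.x - a.x * b.z,
       |                a.x * b.y - a.y * b.x };
       |     }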
       | 
       | 1 - https://www.youtube.com/watch?v=cvyrfPUpyp0
        
       | byyoung3 wrote:
       | program graphics
        
       | demondemidi wrote:
       | Not really sure what this website is asking. Does the person want
       | to: Rig? Texture? Model? Write drivers? Make GUIs? Animate
       | websites? Make graphics tools? Work with shaders? Or work with 2D
       | photo engines? Make 2D games? Make 3D games? Write procedural
       | scripts? Optimize graphics code?
       | 
       | There are hundreds of disciplines that fall under "computer
       | graphics". The website focuses on a teensy little corner:
       | programming graphics SDKs.
        
         | quelsolaar wrote:
         | Being a graphics programmer is a pretty well-defined category
         | of programmer. A lot of the things you mentioned, like rigging,
         | texturing, or modelling, aren't graphics programming, and a
         | graphics programmer is expected to be able to run the gamut of
         | games, 3D, 2D, tools, shaders and optimization.
        
         | hshsbs84848 wrote:
         | I can see how it's confusing but usually "graphics programmer"
         | is someone who works on a graphics rendering engine (either
         | real time or offline rendering)
        
       | trevortheblack wrote:
       | Since this is on the front page I guess I'll post the resource
       | that the graphics programming industry actually uses (note, I'm
       | an author): https://raytracing.github.io/
       | 
       | It's included in the "Useful Websites" in the article above.
       | 
       | Also note that graphics is large enough that there no longer
       | exists a one-size-fits-all solution to learning graphics. If you
       | want to learn graphics I'd recommend finding a mentor.
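       | 
       | To give a flavor of where the first book starts: its early
       | chapters build up to essentially this ray-sphere test
       | (stripped-down types here, purely for illustration):
       | 
       |     #include <cmath>
       | 
       |     struct Vec3 { float x, y, z; };
       |     float dot(Vec3 a, Vec3 b) {
       |       return a.x * b.x + a.y * b.y + a.z * b.z;
       |     }
       |     Vec3 sub(Vec3 a, Vec3 b) {
       |       return {a.x - b.x, a.y - b.y, a.z - b.z};
       |     }
       | 
       |     // Solve |o + t*d - c|^2 = r^2 for t; return the nearer
       |     // root, or -1 on a miss.
       |     float hit_sphere(Vec3 o, Vec3 d, Vec3 c, float r) {
       |       Vec3 oc = sub(o, c);
       |       float a = dot(d, d);
       |       float b = 2.0f * dot(oc, d);
       |       float k = dot(oc, oc) - r * r;
       |       float disc = b * b - 4.0f * a * k;
       |       if (disc < 0.0f) return -1.0f;
       |       return (-b - std::sqrt(disc)) / (2.0f * a);
       |     }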
        
         | starmole wrote:
         | Great stuff! I especially agree with teaching math via
         | raytracing first instead of APIs.
        
       | quelsolaar wrote:
       | C is a perfectly good alternative to C++ for graphics
       | programming. OpenGL and Vulkan are C APIs and DirectX has a C
       | wrapper. Some games like Call of Duty are written in C, but C++
       | is more common.
        
         | mabster wrote:
         | I've done games in both. My favourite is mostly imperative C++
         | so you can use templates.
        
       | TrackerFF wrote:
       | Learning trigonometry and linear algebra would be a good start.
        
       | quelsolaar wrote:
       | As a graphics programmer, I think it's good to have a
       | well-rounded idea of how graphics works. Here are some things I
       | would expect a good graphics programmer to know, beyond just
       | programming and an API:
       | 
       | -Rotation, view, and projection matrices, and general vector
       | math (see the sketch after this list).
       | 
       | -Shader programming.
       | 
       | -Procedural primitives like voronoi, SDF and perlin.
       | 
       | -Image Compositing.
       | 
       | -Forward and deferred rendering.
       | 
       | -Various sampling techniques.
       | 
       | -Shadow and lighting techniques.
       | 
       | -Knowing a bit about how the art pipeline works and how to get
       | data out of 3D apps.
       | 
       | -Being comfortable using a profiler and debugger.
       | 
       | -Capable of reading Siggraph papers.
       | 
       | -Knowing various spatial partitioning and bounding volume
       | hierarchy techniques.
       | 
       | -Being able to build a simple raytracer.
       | 
       | -Good understanding of primitives, like sprites, triangles,
       | n-gons, and so on.
       | 
       | -Some particle and simulation experience.
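       | 
       | On the matrix point above, a minimal sketch of an OpenGL-style
       | perspective projection (the same mapping as the classic
       | gluPerspective; in practice you would pull this from a library
       | such as glm):
       | 
       |     #include <cmath>
       | 
       |     struct Mat4 { float m[16]; };  // column-major, like OpenGL
       | 
       |     // Right-handed view space in, clip space with z mapped
       |     // to [-1, 1] out; fovy_rad is the vertical FOV in radians.
       |     Mat4 perspective(float fovy_rad, float aspect,
       |                      float zn, float zf) {
       |       float f = 1.0f / std::tan(fovy_rad * 0.5f);
       |       Mat4 p = {};                  // start from all zeros
       |       p.m[0]  = f / aspect;         // x scale
       |       p.m[5]  = f;                  // y scale
       |       p.m[10] = (zf + zn) / (zn - zf);
       |       p.m[11] = -1.0f;              // -z_view becomes clip w
       |       p.m[14] = 2.0f * zf * zn / (zn - zf);
       |       return p;
       |     }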
        
         | rajangdavis wrote:
         | Where do you learn this stuff?
        
           | rustybolt wrote:
           | Googling it works pretty well
        
           | quelsolaar wrote:
           | A good start would be the book Real-Time Rendering:
           | https://www.realtimerendering.com/
        
           | account42 wrote:
           | A lot of these things are covered in computer science
           | lectures if you pick the right ones. But you can also pick
           | them up yourself by being curious how things work and reading
           | papers or looking at existing game/engine source code. There
           | are also many online resources as the sibling posters point
           | out. Having at least some basic linear algebra or programming
           | education makes things easier to understand.
        
           | dahart wrote:
           | School, then work, is the most traditional route. If you're
           | past that point and want to learn, the best use of that list
           | is to use it as a self-learning syllabus and set of search
           | terms, so you can start to practice.
        
         | SomeDaysBe wrote:
         | Do you know how one can get a job as a graphics developer if
         | they know most of these? I do graphics programming as a hobby,
         | and I learned a lot of what you mentioned here. But since I
         | don't have any job experience, I never really get any
         | interviews.
        
           | Tiktaalik wrote:
           | Always hard to get a foot in the door into a new role when
           | one doesn't have direct company experience in the role on a
           | resume.
           | 
           | The best thing I can think of would be to work on some
           | independent projects (eg. some mod of a game, some indie game
           | or graphics related app itself) and have that work be part of
           | a portfolio.
           | 
           | The other approach that I've seen work is to get into a games
           | company as it is significantly expanding and make it known
           | that you want to do rendering work in the future. At an old
           | company I was at a Gameplay Programmer pivoted to being a
           | Graphics Programmer as the graphics team needed more help and
           | was expanding, and I see via LinkedIn that he's continued on
           | that path.
        
       | zffr wrote:
       | For complete beginners looking to get a good conceptual
       | understanding of what shaders are actually doing, I would highly
       | recommend this course: https://github.com/ssloy/tinyrenderer
       | 
       | In the course you will build a purely CPU-based renderer that
       | simulates the way OpenGL shaders work. I found it to be
       | incredibly useful for understanding shaders.
        
       | barbariangrunge wrote:
       | If you're teaching yourself, it might be nice to have an early
       | win. Something like unreals material editor or shader graph might
       | be a nice start. Lots of tutorials around. Then, when you go to
       | write actual shaders, you'll know what you logically intend to
       | do, and all you have to do is learn the syntax of whatever
       | graphics language/API you choose.
       | 
       | Tangent: Learning the syntax for OpenGL is hellish, and there's a
       | lack of great resources on it, at least as of several years ago.
       | 
       | Then, after you understand shaders a little, go and make your
       | game engine (physics will be its own beast)
        
       | amelius wrote:
       |     10 GRAPHICS 8+16:REM HIRES MODE WITHOUT TEXT WINDOW
       |     15 SETCOLOR 0,0,0:SETCOLOR 2,10,15
       |     20 COLOR 1
       |     30 PLOT 0,0
       |     40 DRAWTO 319,190
       |     50 GOTO 50
        
       | SuboptimalEng wrote:
       | You can start learning graphics by writing shaders on Shadertoy.
       | It's where tons of graphics programmers get their start.
       | 
       | Shameless self promotion, I've made 10+ tutorials going over
       | topics like: how to write shaders in VS Code, SDFs, ray marching,
       | noise functions, fractional Brownian motion, etc.
       | 
       | https://github.com/suboptimaleng/shader-tutorials
       | 
       | I'm certainly standing on the shoulders of giants like Inigo
       | Quilez, The Art of Code, SimonDev and Acerola.
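       | 
       | To show how small the core of a ray marcher is, here is one
       | sphere-traced ray in plain C++; a fragment shader just runs
       | this same loop once per pixel:
       | 
       |     #include <cmath>
       |     #include <cstdio>
       | 
       |     // Signed distance from (x, y, z) to a unit sphere at the
       |     // origin: negative inside, zero on the surface.
       |     float sdf(float x, float y, float z) {
       |       return std::sqrt(x * x + y * y + z * z) - 1.0f;
       |     }
       | 
       |     int main() {
       |       // March from (0, 0, 3) along -z. Each step is "safe"
       |       // because the SDF is the distance to the nearest surface.
       |       float t = 0.0f;
       |       for (int i = 0; i < 64; ++i) {
       |         float d = sdf(0.0f, 0.0f, 3.0f - t);
       |         if (d < 1e-4f) {
       |           std::printf("hit at t=%g\n", t);
       |           return 0;
       |         }
       |         t += d;
       |       }
       |       std::printf("miss\n");
       |     }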
        
       | drones wrote:
       | Are there any good resources for learning graphics programming
       | for the first time with WebGPU? I have heard mixed opinions about
       | WGSL.
        
       | raytopia wrote:
       | If you want to be a retro graphics programmer reject shaders and
       | return to glBegin.
        
         | jherico wrote:
         | I'm sure there's a set of shaders that will basically let you
         | emulate the fixed-function pipeline without having to endure
         | the shitty performance implications of passing a vertex list to
         | the GPU every single frame.
        
         | account42 wrote:
         | You can use vertex buffers or at least vertex arrays without
         | glBegin lol.
        
       | dboreham wrote:
       | Put one pixel in front of another..
        
       | jheriko wrote:
       | make things. the end.
        
       | photochemsyn wrote:
       | All the comments here are BS. You have to learn how to construct
       | 2D and 3D graphical spaces in real time and let them evolve,
       | and, I'm sorry to say, as others have said before, there is no
       | royal road to mathematics. You simply have to put your time in
       | the trenches. All the tools being sold to you are ephemeral and
       | will soon become obsolete, so you simply have to grasp the
       | fundamentals of linear algebra and 3D spatial representations.
       | 
       | You can try to use tools created by others but the results will
       | all just look the same as theirs, and if this is too much bother
       | I'd suggest going back to non-digital mediums for your artwork,
       | like pen & ink, watercolors, oil paints, etc.
       | 
       | If you can't handle vector calculus and complex analysis then
       | that's too bad. Try harder or do something else.
        
         | wly_cdgr wrote:
         | Ok but what are you saying? After understanding the math, are
         | you suggesting that people need to use that understanding to
         | interface with the graphics hardware at a level below Vulkan?
        
         | unconed wrote:
         | Lol, the only math you need for practical graphics coding is
         | linear algebra. Calculus only comes into play when doing e.g.
         | light integrals, and complex analysis only when you start doing
         | advanced convolutions.
         | 
         | The math isn't the problem, the absolutely abysmal API design
         | is. The current generation of graphics APIs were designed for
         | game engines to build middleware on top, not for practical use,
         | and it shows.
         | 
         | Too many hardcoded limits, too many different ways to do the
         | same thing, too many caveats and exceptions... and not enough
         | decent developer tools to see wtf is going on on the other
         | side.
        
           | eurekin wrote:
           | > the absolutely abysmal API design is
           | 
           | Oh... Now I get why others think AI could improve things here :)
        
       | uglycoyote wrote:
       | I'm a game developer but not specifically a graphics programmer.
       | Although I work with modern graphics APIs and GLSL shaders in my
       | day job, when my 13-year-old recently graduated from wanting to
       | program in Scratch or Python to wanting to learn C++, I decided
       | the best thing to do was break out the old OpenGL 1.2 DLLs that
       | I still had on my machine since 1999 and start him writing some
       | code using GLUT and glBegin/glVertex/glEnd-style immediate-mode
       | programming.
       | 
       | It is just a lot more fun than trying to suffer through all of
       | the setup that one needs to do with modern APIs. He is more
       | interested in computational-geometry-type things like Voronoi
       | diagrams, so the graphics API is really just a means to an end,
       | and fancy shaders and lighting aren't important right now.
       | Performance in C++ and old-school OpenGL is about a thousand
       | times faster than Scratch, so I think we hit a sweet spot for
       | where he is in terms of his progression of learning.
       | 
       | Even with the simplified API of OpenGL 1.2, he is still biting
       | off a pretty ambitious chunk of learning in trying to grasp C++
       | at the same time as OpenGL, so the simplicity helps keep it sane
       | and manageable, and things are going well. He did some neat
       | marching squares demos and I helped add an ImGui menu to tune
       | parameters at runtime. It has been entertaining!
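       | 
       | For anyone wanting to try the same route, the classic first
       | program really is only about this big (assuming GLUT is
       | installed; build flags vary by platform):
       | 
       |     #include <GL/glut.h>
       | 
       |     void display() {
       |       glClear(GL_COLOR_BUFFER_BIT);
       |       glBegin(GL_TRIANGLES);  // one call per vertex, no buffers
       |       glColor3f(1, 0, 0); glVertex2f(-0.6f, -0.5f);
       |       glColor3f(0, 1, 0); glVertex2f( 0.6f, -0.5f);
       |       glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.6f);
       |       glEnd();
       |       glutSwapBuffers();
       |     }
       | 
       |     int main(int argc, char** argv) {
       |       glutInit(&argc, argv);
       |       glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
       |       glutCreateWindow("first triangle");
       |       glutDisplayFunc(display);
       |       glutMainLoop();
       |     }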
        
       | slalomskiing wrote:
       | I went through this and switched from web dev to being a graphics
       | programmer at a game studio
       | 
       | Personally I think LearnOpenGL is still the best tutorial series
       | if you want to work on game rendering engines
       | 
       | Just because it covers a ton of different topics from beginner up
       | to advanced all in one series. You go from scratch all the way up
       | to implementing deferred rendering and a full PBR implementation
       | 
       | If you understand all of those tutorials you have a pretty good
       | baseline for how modern game rendering works (minus ray tracing)
       | 
       | In terms of getting a job though the modern APIs are highly
       | desirable
        
       | wly_cdgr wrote:
       | The details are outdated now, I guess, but I don't know if
       | there's any more inspiring reading for an aspiring graphics
       | programmer than Fabian Giesen's "A trip through the graphics
       | pipeline" https://fgiesen.wordpress.com/2011/07/09/a-trip-
       | through-the-...
       | 
       | As for a place to actually start, I think Computer Graphics From
       | Scratch is a good place:
       | https://www.gabrielgambetta.com/computer-graphics-from-scrat....
       | A great thing about the book is it gives
       | you many ideas for where to go next once you work your way
       | through to the end.
        
         | Agentlien wrote:
         | The details are surprisingly still relevant. A lot has changed,
         | but most of that (compute shaders, ray tracing hardware, ...)
         | is actually orthogonal to everything mentioned in the article,
         | or is simply details changing in aspects it glosses over.
         | 
         | I still link people this article because I have yet to find
         | anything which does a better job explaining these things.
        
       | dahart wrote:
       | Oh there are so many more ways to become a graphics programmer
       | than by starting with DX/VK/OGL. No need to use C++ at all. Look
       | at all the amazing 3D graphics 3Blue1Brown does in Python.
       | 
       | Learn DirectX or Vulkan if you want to write game engines.
       | 
       | Learn WebGL if you want to write browser applications.
       | 
       | Those APIs are heavy though, and don't even necessarily teach you
       | that much about graphics on their own. If you want to learn
       | graphics concepts, write your own rasterizer and ray tracer -
       | both! - in any language you want.
       | 
       | There are also a bunch of super-easy-to-use graphics libraries &
       | coding environments that are so much more fun than slogging
       | through Vulkan or DX. Processing is wonderful. Or check out
       | PlotDevice.io (Python), or its predecessors NodeBox and DrawBot.
       | ShaderToy is another place where you can learn how to write
       | shaders, or lots and lots about rendering, and it's so easy to
       | get started. JavaScript has lots of options and libraries. These
       | can be way more accessible and motivating to a beginner, but
       | still offer enough power and flexibility to take the curious
       | student as far as they want.
        
       | MurageKabui wrote:
       | Not one mention of GDI+?
        
       | atoav wrote:
       | Depends on what you want to make and on which level. Doing
       | graphics for a desktop application will differ from doing it for
       | a web application, for embedded devices or for a game.
       | 
       | That being said for a beginner I would recommend starting with
       | the examples in processing.org. This is basically a small IDE
       | where you can make your code draw things in a window with minimal
       | boilerplate (the boilerplate is hidden). There is a heap of
       | examples, how to do easing functions, how to work with vectors,
       | etc.
       | 
       | The stuff learned there will be useful for anything where you
       | need to work with a coordinate system.
        
       | Animats wrote:
       | I have the horrible feeling that, if you start to become a
       | graphics programmer now, by the time you're done, AI will be
       | doing it.
       | 
       | If you want to make games, learn a game engine. The big 3D ones
       | are Unity (not too hard), Unreal Engine (hard, used for AAA
       | titles), and Bevy (in Rust, open source.) There are a ton of 2D
       | engines, mostly used to do retro stuff.
        
         | eurekin wrote:
         | AI will be doing graphics programming? Please elaborate; I have
         | a very hard time understanding how current clunky models that
         | cannot generate a Java class without "// here you fill in the
         | implementation" everywhere would go about improving current PBR
         | shader code, for example.
        
           | FoodWThrow wrote:
           | > AI will be doing graphics programming?
           | 
           | I doubt that. I think the bigger concern (if you could call
           | it that) is that certain AI techniques may be able to replace
           | entire stacks once they get advanced/stable enough. Something
           | along the lines of this perhaps, where you render simplified
           | geometry and let the AI fill in the blanks:
           | https://www.youtube.com/watch?v=P1IcaBn3ej0
           | 
           | Even without experimental technologies, you can get a glimpse
           | of how shipped tools like ray reconstruction can morph the
           | field in the future, forever entrenching themselves into
           | graphics programming one way or another.
           | 
           | As far as AI _writing_ graphics code? No way, at least not in
           | the next couple of decades. Button snippets are a far cry
           | from rendering virtualized geometry 60 times a second.
        
         | account42 wrote:
         | I expect AI will consume game design jobs long before low level
         | graphics programming.
         | 
         | > The big 3D ones are Unity (not too hard), Unreal Engine
         | (hard, used for AAA titles), and Bevy (in Rust, open source.)
         | 
         | Bevy is pretty much unheard of outside the rust community and
         | also quite new (= hasn't yet proven itself). Godot would be the
         | engine most likely to win the third spot.
         | 
         | I would recommend against investing time learning Unity because
         | the company is clearly insane if they thought they could get
         | away with charging developers of existing games per install
         | [0].
         | 
         | [0] https://news.ycombinator.com/item?id=37481344
        
           | Animats wrote:
           | Agree about Godot. Bevy is moving up but not there yet.
           | 
           | Also, the suggested developer machine for UE5 with everything
           | turned on is expensive. 64GB of RAM and a graphics card with
           | a price over $1000 are needed to build and run the Matrix
           | Awakens demo. And it will take hours to build. The result is
           | a playable game that looks like a Hollywood-grade movie. You
           | have the sources and can mod it.
           | 
           | Unity's change to their business model has annoyed everybody
           | in the industry.
        
       | ribit wrote:
       | IMO the best way to start for a beginner is with Metal. It's a
       | delightfully streamlined API that avoids the idiosyncratic
       | complexity of DX and Vulkan; it's very quick to pick up if you
       | already have some C++ experience, and the tooling is excellent.
       | Most importantly, you can focus on GPU programming instead of
       | fighting the API, and you get to play with all the fancy tools
       | (mesh shading, ray tracing) with a very low barrier to entry.
       | And all
       | that you learn with Metal translates directly to other APIs.
        
       | 127 wrote:
       | I really don't consider getting familiar with whatever minutiae
       | C++/Vulkan throw at you to be graphics programming. It just
       | presents an unnecessary barrier that may change in the future,
       | without you learning any of the core skills, which are purely
       | algorithm- and math-based. When you learn a low-level API to
       | talk to the GPU, or an antiquated horror-show of a language such
       | as C++, you're not actually doing any graphics programming. Get
       | into shaders (also compute) ASAP.
       | 
       | getting started:
       | 
       | https://www.khanacademy.org/computing/pixar (super basics)
       | 
       | https://iquilezles.org/articles/
       | 
       | https://www.shadertoy.com/
       | 
       | youtube/gamedev:
       | 
       | https://www.youtube.com/@Acerola_t
       | 
       | https://www.youtube.com/@crigz
       | 
       | research:
       | 
       | https://www.cs.cmu.edu/~kmcrane/
       | 
       | http://rgl.epfl.ch/people/wjakob
       | 
       | https://www.nvidia.com/en-us/research/
        
       | frakt0x90 wrote:
       | Blender's shader graph and geometry nodes seem like a fun/easy
       | intro that I've been meaning to try. May not be 'programming' in
       | the literal sense of typing text on the screen to produce
       | graphics, but I'm hoping it would give me a conceptual grasp of
       | how to produce the images I want that I could later transform
       | into code if needed. Please correct me if I'm wrong, as I
       | haven't dived in.
        
       | akkad33 wrote:
       | Is there any point in learning new programming skills when AI
       | can do it all? I feel discouraged about learning new skills in
       | this era
        
       ___________________________________________________________________
       (page generated 2023-11-23 23:02 UTC)