[HN Gopher] The Snapdragon 855's iGPU
___________________________________________________________________
The Snapdragon 855's iGPU
Author : ingve
Score : 172 points
Date : 2024-05-02 07:58 UTC (15 hours ago)
(HTM) web link (chipsandcheese.com)
(TXT) w3m dump (chipsandcheese.com)
| jimmySixDOF wrote:
| Nice. Should probably include the year in the title (2019?), but
| it's interesting to me because even the latest 2024 XR2+ Gen2 is
| based on the Snapdragon 865 with almost the same iGPU, which may
| mean there is a technical benefit (vs. just extra cost) to not
| using the newer 888s.
| helf wrote:
| The year is... He posted it yesterday lol.
| crest wrote:
| Did you read the publication date of the article's benchmarks,
| or did you just blindly assume the author published them on the
| day the embargo on publishing benchmarks for the chip lifted?
| TehCorwiz wrote:
| I'm not disputing your suggested date, but I can't find it
| anywhere in the article. Can you explain where you got it?
| adrian_b wrote:
| The article is new; only the tested CPU+GPU is from 2019.
|
| Snapdragon 855 was indeed the chipset used in most flagship
| Android smartphones of 2019. I have one in my ASUS ZenFone.
|
| Based on the big ARM cores that are included, it is easy to
| determine the manufacturing year of a flagship smartphone.
| Cheaper smartphones can continue to use older cores. Arm
| announces a core one year before it shows up in smartphones.
|
|     Cortex-A72 => 2016
|     Cortex-A73 => 2017
|     Cortex-A75 => 2018
|     Cortex-A76 => 2019 (like in the tested Snapdragon 855)
|     Cortex-A77 => 2020
|     Cortex-X1  => 2021
|     Cortex-X2  => 2022
|     Cortex-X3  => 2023
|     Cortex-X4  => 2024
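| As a quick lookup following the mapping above (a rough sketch -
| cheaper phones reuse older cores, so it only works for flagships):
|
|     # Rough flagship-year lookup from the big core, per the table above.
|     CORE_YEAR = {
|         "Cortex-A72": 2016, "Cortex-A73": 2017, "Cortex-A75": 2018,
|         "Cortex-A76": 2019, "Cortex-A77": 2020, "Cortex-X1": 2021,
|         "Cortex-X2": 2022,  "Cortex-X3": 2023,  "Cortex-X4": 2024,
|     }
|     print(CORE_YEAR["Cortex-A76"])  # Snapdragon 855's big cores -> 2019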
| jamesy0ung wrote:
| I didn't realise the cores in the Pi 5 were 4 years old
| already. Surely if some vendor like Qualcomm or MediaTek
| released an SBC with decent software and recent cores, they
| could wipe the floor with the competition.
| nsteel wrote:
| They are 16nm, so yeh, "old". Newer cores mean a newer
| node and a considerably more expensive chip. At which point,
| why are you buying an SBC over some small Intel/AMD thing?
| netbioserror wrote:
| iGPUs have climbed high enough that they are overpowered for
| "normal" tasks like video watching and playing browser games.
| They can even do 1080p 60 FPS gaming to a decently high standard.
| And we've already proven via ARM that RISC architectures are more
| than ready for everything from embedded to high-power compute.
| What happens when iGPUs get good enough for 1440p 120 FPS high-
| end gaming? Game visuals have plateaued on the rasterization
| front. Once iGPUs are good enough, nobody will have much reason
| to get anything other than a tiny mini PC. The last frontier for
| GPUs will be raw supercomputing. Basically all PCs from there on
| out will just be mini machines.
|
| The next few years will be very interesting.
| jsheard wrote:
| Memory bandwidth is the major bottleneck holding iGPUs back,
| the conventional approach to CPU RAM is absolutely nowhere near
| fast enough to keep a big GPU fed. I think the only way to
| break through that barrier is to do what Apple is doing - move
| all of the memory on-package and, unfortunately, give up the
| ability to upgrade it yourself.
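| A rough back-of-the-envelope sketch of that gap (peak theoretical
| numbers; the specific parts named are just illustrative
| assumptions, and real-world figures are lower):
|
|     # Peak bandwidth (GB/s) ~= bus width (bits) / 8 * per-pin rate (GT/s)
|     def peak_gb_s(bus_width_bits: int, rate_gtps: float) -> float:
|         return bus_width_bits / 8 * rate_gtps
|
|     configs = {
|         "Dual-channel DDR5-5600 (typical desktop)": (128, 5.6),
|         "Apple M3 Max, 512-bit LPDDR5-6400":        (512, 6.4),
|         "RTX 4090, 384-bit GDDR6X @ 21 Gbps":       (384, 21.0),
|     }
|     for name, (width, rate) in configs.items():
|         print(f"{name}: ~{peak_gb_s(width, rate):.0f} GB/s")
|     # -> ~90, ~410 and ~1008 GB/s respectively: a conventional
|     #    dual-channel setup is roughly an order of magnitude short
|     #    of what a big discrete GPU gets fed.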
| resource_waste wrote:
| The problem is there is still a limit. There is a reason we
| have GPUs.
|
| I genuinely can't figure out if Apple has a point, or is
| doing something quirky so they can overcharge outsiders.
|
| Given that no other companies are really investing resources
| here and Nvidia is the giant, I'd guess it's the latter. I see
| too many Reddit posts about disappointed CPU LLM users who
| wanted to run large models.
| jsheard wrote:
| > I genuinely can't figure out if Apple has a point, or is
| doing something quirky so they can overcharge outsiders.
|
| Well, Apple was already soldering the memory onto the
| motherboard in most of their Intel machines, so moving the
| memory on-package didn't really change their ability to
| gouge for memory upgrades much. They were already doing
| that.
| faeriechangling wrote:
| Apple using a 512-bit memory bus on its Max processors is
| indeed the future of APUs if you ask me.
|
| AMD is coming out with Strix Halo soon with a 256-bit memory
| bus, and on the high end we've also seen the niche Ampere
| platform running Arm CPUs with 576-bit buses. The PS5 uses a
| 256-bit bus and the Series X a 320-bit bus, but they're using
| GDDR instead of DDR, which increases cost and latency to
| optimise for bandwidth; there's no reason you couldn't
| design a laptop or Steam Deck that did the same thing. AMD
| has their MI300X, which uses 192 GB of HBM3 over an 8192-bit
| bus.
|
| I don't think it's just apple going this way, and I do
| think that more and more of the market is going to be using
| this unified approach instead of the approach of having a
| processor and coprocessor separated over a tiny PCIe bus
| with separate DDR/GDDR memory pools. With portable devices
| especially, more and more every year I struggle to see how
| the architecture is justified when I see the battery life
| benefits of ridding yourself of all this extra hardware.
| LLM inferencing also creates a nice incentive to go APU
| because the dual processor architecture tends to result in
| excessive bandwidth and anemic capacity.
|
| Nvidia may be the giant, yet if you look at what most gamers
| actually run on, they run on Apple APUs, Snapdragon APUs,
| and AMD APUs. PCs with a discrete CPU and a separate Nvidia
| GPU have become a relatively smaller slice of the market,
| despite that market segment having grown and Nvidia having
| a stranglehold upon it. The average game is not developed to
| run on a monster machine with a separate CPU and GPU - it's
| designed to run on a phone APU - and something like a Max
| processor is much more powerful than required to run the
| average game coming out.
| talldayo wrote:
| > Nvidia may be the giant yet if you look at what most
| gamers actually run on, they run on Apple APUs,
| Snapdragon APUs, and AMD APUs.
|
| They really don't? You're trying quite hard to conflate
| casual gaming with console and PC markets, but they
| obviously have very little overlap. Games that release
| for Nvidia and AMD systems almost never turn around and
| port themselves to Apple or Snapdragon platforms. I'd
| imagine the people calling themselves gamers aren't
| referring to their Candy Crush streak on iPhone.
|
| > something like a Max processor is much more powerful
| than required to run the average game coming out.
|
| ...well, yeah. And then the laptop 4080 is some 2 times
| faster than _that_:
| https://browser.geekbench.com/opencl-benchmarks
| faeriechangling wrote:
| I mean, I see tons of shooters (PUBG, Fortnite, CoD, etc.)
| and RPGs (Hoyoverse) getting both mobile and PC releases,
| and they're running on the same engine under the hood.
| I'm even seeing a few indie games being cross-platform.
| Of course some games simply don't work across platforms
| due to input or screen-size limitations, but Unity/Unreal
| are more than three quarters of the market and can enable a
| release on every platform, so why not do a cross-platform
| release if it's viable?
|
| I just see the distinction you're drawing as arbitrary and
| old-fashioned, and as missing the huge rise of midcore
| gaming, which is seeing tons of mobile/console/PC releases.
| I understand that a TRUE gamer would not be caught dead
| playing such games, but as more and more people end up
| buying APU-based laptops to play their Hoyoverse games,
| that's going to warp the market and cause the oppressed
| minority of TRUE gamers to buy the same products due to
| economies of scale.
| talldayo wrote:
| I don't even think it's a "true gamer" thing either.
| Besides Tomb Raider and Resident Evil 8, there are pretty
| much no examples of modern AAA titles getting ported to
| Mac and iPhone.
|
| The console/PC release cycle is just different. _Some_
| stuff is cross-platform (particularly when Apple goes out
| of their way to negotiate with the publisher), but _most_
| stuff is not. It's not even a Steam Deck situation where
| Apple is working to support games regardless; they simply
| don't care. Additionally, the majority of these cross-
| platform releases aren't quality experiences but gacha
| games, gambling apps and subscription services. You're
| not wrong to perceive mobile gaming as a high-value
| market, but it's on a completely different level from
| other platforms regardless. If you watch a console/PC
| gaming showcase nowadays, you'd be lucky to find even a
| single game that is supported on iOS and Android.
|
| > so why not do a cross-platform release if it's viable?
|
| Some companies do; Valve famously went through a lot of
| work porting their games to macOS, before Apple
| deprecated the graphics API they used and cut off 32-bit
| library support. By the looks of it, Valve and many
| others just shrug and ignore Apple's update treadmill
| altogether. There's no shortage of iOS games I played on
| my first-gen iPod that are flat-out deprecated on
| today's hardware. Meanwhile the games I bought on Steam
| in 2011 still run just fine today.
| faeriechangling wrote:
| I just don't get this obsession with the idea that only
| recently-released "AAA" games are real games (or that the
| only TRUE gamers are those who play them) and it seems
| like the market and general population doesn't quite
| grasp it either. These FAKE gamers buy laptops too, and
| they probably won't see the value in a discrete GPU.
|
| Besides, it's ultimately irrelevant because when Strix
| Halo comes out, it's going to have the memory bandwidth
| and compute performance to be able to play any "AAA" game
| released for consoles until consoles refresh around
| ~2028, which is 4 solid years of performance before new
| releases will really make them struggle. These APUs won't
| be competing with the 4080, but instead the 4060, which
| is a more popular product anyways. Discrete GPUs are in
| an awkward spot where they're not going to be
| significantly more future proof than an APU you can buy,
| but will suck more power, and will likely have a higher
| BOM to manufacture.
|
| If you asked TRUE gamers whether gaming laptops with Nvidia
| GPUs were worth it a few years ago, when they were already
| the majority of the market, they would have laughed in your
| face, pointed out that they didn't run the latest AAA games
| well and thus TRUE gamers wouldn't buy them, and told you to
| instead buy a cheap laptop paired with a big desktop.
| talldayo wrote:
| > I just don't get this obsession with the idea that only
| recently-released "AAA" games are real games
|
| It's really the opposite; I think obsessing over casual
| markets is a mistake since casual gaming customers are
| incidental. These are people playing the lowest-common-
| denominator, ad-ridden, microtransaction-laden apps that
| fill out the App Store, not _Halo 2_ or _Space Cadet
| Pinball_. It really doesn't matter when the games came
| out, because the market is always separated by more than
| just third-party ambivalence. Apple loves this
| unconscious traffic, because those users will buy _any_
| garbage put in front of them. Let them gorge on Honkai
| Star Rail, while Apple collects its 30% cut of their
| spending on digital vice.
|
| Again, I think it's less of a distinction between "true"
| and "casual" gamers, but more what their OEM encourages
| them to play. When you owned feature phones, it was
| shitty Java applets. Now that you own an iPhone... it's
| basically the same thing with a shinier UI and larger
| buttons to enter your credit card details.
|
| I'll just say it: Apple's runtime has to play catch-up
| with stuff like the Steam Deck and even modern game
| consoles. The current piecemeal porting attempts are
| pathetic compared to businesses a fraction their size.
| Even Nvidia got more people to port to the Shield TV, and
| that was a failure from the start.
| hamilyon2 wrote:
| Is the best-selling game on every platform, Minecraft,
| casual or hardcore? What about Hearthstone, which allows
| you to play as much or as little as you like, even once a
| year (and win) - casual, competitive and addictive: choose
| three.
|
| It is as if the casual/TRUE dichotomy is a false one.
| Dalewyn wrote:
| >The average game developed is not developed to run on a
| separate CPU and GPU monster beast - they're designed to
| run on a phone APU
|
| I _love_ how performant mobile games are on desktop/laptop
| hardware assuming good ports, _Star Rail_ and _Princess
| Connect! Re:Dive_ for some examples.
|
| This will probably go away once mobile hardware gets so
| powerful there's no requirement for devs to be efficient
| with their resource usage, as has happened with
| desktop/laptop software, but god damnit I'll enjoy it
| while it lasts.
| resource_waste wrote:
| >Nvidia may be the giant yet if you look at what most
| gamers actually run on, they run on Apple APUs,
| Snapdragon APUs, and AMD APUs.
|
| Wat
|
| This is like extremely incorrect.
|
| It's Nvidia.
|
| Yikes, your technobabble at the start seemed like smart-people
| talk, but the meat and potatoes is off base.
| ProfessorLayton wrote:
| Is it incorrect though? Mobile alone is 85B of the 165B
| market [1], not to mention that Nintendo's Switch is
| basically an android tablet with a mobile chipset.
|
| [1] https://helplama.com/wp-
| content/uploads/2023/02/history-of-g...
| sudosysgen wrote:
| To be fair, an Nvidia mobile chipset.
|
| Also, much of mobile gaming isn't exactly pushing the
| envelope of graphics, especially by revenue.
| resource_waste wrote:
| Yes, we aren't using mobile gaming as an indicator of GPU
| growth/performance.
|
| This seems like some way to be like 'Well tecknicaklly',
| to justify some absurd argument that doesn't matter to
| anyone.
| ProfessorLayton wrote:
| >Yes, we aren't using mobile gaming as an indicator of GPU
| growth/performance.
|
| Who's "we" because big tech has absolutely been touting
| GPU gains in their products for a _long time_ now [1],
| driven by gaming. Top of the line iPhones can do
| raytracing now, and are getting AAA ports like Resident
| Evil.
|
| In what world is being over half of a 185B industry a
| technicality? A lot of these advancements on mobile end
| up trickling up to their laptop/desktop counterparts (see
| Apple's M-series), which matters to non-mobile gamers as
| well. Advancements that wouldn't have happened if the
| money wasn't there.
|
| [1] https://crucialtangent.files.wordpress.com/2013/10/ip
| hone5s-...
| kllrnohj wrote:
| > Apple using a 512-bit memory bus on its Max processors
| is indeed the future of APUs if you ask me.
|
| It's very expensive to have a bus that wide, which is why
| it's so rarely done. Desktop GPUs have done it in the
| past ( https://www.techpowerup.com/gpu-
| specs/?buswidth=512%20bit&so... ), but they all keep
| pulling back from it because it's too expensive.
|
| Apple can do it because they can just pay for it and know
| they can charge for it, they aren't really competing with
| anyone. But the M3 Max is also a stonking _huge_ chip -
| at 92bn transistors it's significantly bigger than the
| RTX 4090 (76bn transistors). Was a 512-bit bus _really_ a
| good use of those transistors? Probably not. Will others
| do it? Probably also no, they need to be more efficient
| on silicon usage. Especially as node shrinks provide less
| & less benefit yet cost increasingly more.
| faeriechangling wrote:
| 512-bit is probably a bit extreme, but I can see 192-bit
| and 256-bit becoming more popular. At the end of the day,
| if you have a high-end APU, having a 128-bit bus is
| probably THE bottleneck to performance. It's not clear to
| me that it makes sense or costs any less to have two
| 128-bit buses on two different chips which you see on a
| lot of gaming laptops instead of a single 256-bit bus on
| one chip for the midrange market.
|
| The M3 Pro uses a mere 37 billion transistors with a
| 192-bit bus, so you can get wider than 128-bit while
| being economical about it. I'd love for there to be a
| 512-bit Strix Halo, but it probably won't happen; it
| probably does not make business sense.
|
| I don't know if the comparison to GPUs necessarily tracks
| here because the price floor of having 8 chips of GDDR is
| a lot higher than having 8 chips of DDR.
| nsteel wrote:
| But isn't that GDDR option something like 3-4x the
| bandwidth? So you'd fit far fewer chips to hit the same
| bandwidth.
| 15155 wrote:
| I would rather be in a world where we have skilled hot-air
| rework shops in major cities (like Shenzhen has) that can
| cheaply do this upgrade.
|
| The speed of light isn't changing, nor is EMI. Upgradability
| is directly at odds with high-speed, RF-like, timing-
| sensitive protocols. "Planned obsolescence!!!" - no:
| electrical reality.
| bluGill wrote:
| The upgrade would be replacing the iGPU, including the
| on-board memory.
|
| Upgrades are not really environmentally good. They might be
| the best compromise, but the parts you take off still are
| discarded.
| throwaway11460 wrote:
| But it's still better than discarding the whole thing,
| isn't it?
| paulmd wrote:
| Why discard anything at all? If the laptop is
| sufficiently durable it should be able to endure multiple
| owners over its functional lifespan.
| throwuxiytayq wrote:
| It often takes a very slight upgrade to give the hardware
| a whole new life. Plenty of macbooks with 8GB of RAM out
| there.
| talldayo wrote:
| Well one day the SSD will fail, and then your options are
| as follows:
|
| 1) reflow the board with a steady hand
|
| 2) bin it
|
| The overwhelming majority of customers will not choose
| option 1.
| doctorpangloss wrote:
| I am pretty sure cooling is holding back performance on
| iPhones and other phones, though. They throttle after about
| five minutes of typical high-end 3D gameplay. That's how an
| iPhone can be much faster than a Switch - including using
| faster RAM than it - but not really useful as a game console.
| papruapap wrote:
| I don't think so. AMD is releasing a 256-bit-wide part soon;
| we will see how it performs.
| ryukafalz wrote:
| If so I would like to see more of what the MNT Reform[0] is
| doing. You can have your CPU, GPU, and memory all on a single
| board while still making that board a module in a reusable
| chassis. There's no reason the rest of the device needs to be
| scrapped when you upgrade.
|
| [0] https://mntre.com/modularity.html
| kanwisher wrote:
| We keep making more intensive games and applications that tax
| the GPU more. LLMs have already kicked off another 5-year cycle
| of upgrades before they run at full speed on consumer hardware.
| citizenpaul wrote:
| Isn't one of the issues with local LLM the huge amount of GPU
| memory needed? I'd say that we'll go a lot longer than 5
| years before phones have 24+GB of VRAM.
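| For scale, weight memory alone is roughly parameter count times
| bytes per parameter (a sketch with assumed quantization levels;
| it ignores KV cache and runtime overhead):
|
|     def weights_gb(params_billion: float, bits_per_weight: int) -> float:
|         return params_billion * bits_per_weight / 8
|
|     print(weights_gb(7, 4))    # 7B model, 4-bit  -> ~3.5 GB
|     print(weights_gb(70, 4))   # 70B model, 4-bit -> ~35 GB
|     print(weights_gb(70, 16))  # 70B model, fp16  -> ~140 GB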
| jerlam wrote:
| It's very questionable whether consumers want to spend the
| money to run LLMs on local hardware - and companies would
| rather charge a subscription fee than let them.
|
| High-end PC gaming is still a small niche. Every new and more
| powerful gaming console does worse than the previous
| generation. VR hasn't taken off to a significant degree.
| testfrequency wrote:
| You know, I agreed with this years ago and spent $4,000 on an
| SFFPC based on an Intel NUC... only to have Intel drop the NUC
| entirely. While the NUC has integrated graphics, the major
| benefit was pairing it with a full-sized GPU.
|
| Now I'm stuck with an old CPU, and have little reason to
| upgrade the GPU.
|
| I guess my main hope is that it becomes modular and seamless to
| upgrade small-frame machines in the future, so I can keep a
| case standard across generations.
| netbioserror wrote:
| It's my hope as well that maybe a new upgradeable standard
| for mini-PCs comes about. But keep in mind, Linux doesn't
| care much if the hardware configuration changes. One could
| conceivably just move their SSD and other drives to a new
| mini PC and have a brand new system running their old OS
| setup. If only Microsoft were so accommodating. The added
| benefit being how useful a mini PC can be over an old tower.
| Just buy a cheap SSD and: Give it to grandma; turn it into a
| NAS; use it as a home automation hub; use it as a TV media
| center; any number of other possibilities that a small form
| factor would allow. All things that were _possible_ with an
| old tower PC, but far less convenient. And now that we're on
| a pretty stable software performance plateau (when we don't
| throw in browsers), newer chips can conceivably last much
| longer, decades, even.
| Dalewyn wrote:
| >If only Microsoft were so accommodating.
|
| You can use sysprep[1], though nowadays you don't even need
| to do that most of the time.
|
| [1]: https://learn.microsoft.com/en-us/windows-
| hardware/manufactu...
| Dalewyn wrote:
| >What happens when iGPUs get good enough for 1440p 120 FPS
| high-end gaming?
|
| The video card goes down the same road as the sound card.
|
| Given how expensive video cards have gotten thanks first to
| cryptocurrency mining and now "AI", it's only a matter of time.
| saidinesh5 wrote:
| What makes it even more interesting is that more and more people
| are starting to enjoy AA games more than AAA games for various
| reasons (cost, more experimental/novel gameplay
| mechanics/story).
|
| It feels like the next few years would be really glorious for
| handheld gaming...
| nothercastle wrote:
| Basically every AAA game has been bad recently. The only
| interesting games are in the AA and indie categories.
| wiseowise wrote:
| > What happens when iGPUs get good enough for 1440p 120 FPS
| high-end gaming?
|
| Hell yeah, we celebrate.
| MrBuddyCasino wrote:
| > _What happens when iGPUs get good enough for 1440p 120 FPS
| high-end gaming?_
|
| They won't with the mainstream number of memory channels, where
| iGPUs are severely bandwidth starved. AMD Strix Halo wants to
| change that, but then the cost difference to a discrete GPU
| gets smaller.
|
| They might cannibalise even more of the low-end market, but I'm
| not sure they will make mid-range (or high-end) dGPUs obsolete.
| fidotron wrote:
| For the near future, actually using this power causes a
| notable battery hit. We reached the point a while back with
| mobile 3D where you actively have to hold yourself back to stop
| draining the battery, since so many devices will keep up,
| frame-rate-wise, to a surprising degree.
|
| Mobile phones have taught us another annoying lesson, though:
| edge compute will forever be under-utilized. I would go so far
| as to say a huge proportion of the mainstream audience is now
| simply not impressed by any advances in this area; the noise is
| almost entirely just from developers.
| callalex wrote:
| Increased edge compute has brought a lot of consumer features
| if you take a step back and think about it. On-device,
| responsive speech recognition that doesn't suck is now
| possible on watches and phones without a server round trip,
| and a ton of image processing such as subject classification
| and background detection is used heavily. Neither of those
| applications were even possible 5-7 years ago and now even
| the cheap stuff can do it.
| ethanholt1 wrote:
| I was able to get Counter-Strike 2 running on the iGPU of a
| fourth-gen Radeon chip. I don't remember the name, and I got
| barely 10 FPS; it was unplayable, but it ran. And it was
| amazing.
| msh wrote:
| CS2 runs fine on a steam deck with its iGPU
| beAbU wrote:
| Cloud gaming + a monthly subscription is probably going to be
| the end result. I won't be surprised if the next Xbox generation
| is a streaming stick + wireless controller.
|
| The writing is on the wall for me. A cheap streaming device + a
| modest games subscription will open the platform up to many who
| are currently held back by the cost of the console.
| byteknight wrote:
| To the detriment of the experience.
| hollerith wrote:
| Hasn't that been tried by well-funded companies like Google
| starting over 10 years ago and failed?
| masterj wrote:
| Yes, but sometimes ideas come ahead of the infrastructure
| that would enable them, in this case fiber rollouts and
| geographically distributed data centers with GPUs. I've
| been really impressed by GeForce Now lately and XBox Cloud
| Gaming is a thing.
| beAbU wrote:
| Yes and? Does that mean it can never be tried again?
|
| If car makers had given up on EVs 30 years ago and never tried
| improving them, where would we be today? Because EVs are not
| the ICE replacement today, does that mean they won't be in 30
| years' time?
|
| All I'm saying is that the natural outcome for gaming is
| cloud + subscription. Maybe not the next xbox console, but
| possibly the one after next.
|
| This was the inevitable outcome of music and video. Gaming
| is next.
|
| We are going to own nothing and we will love it.
| 0x457 wrote:
| Stadia was doomed to fail:
|
| - You have to buy all the games all over again
|
| - People who are familiar with Google's ways didn't want
| to invest anything into Stadia
|
| - The Stadia controller, while nice, didn't work with anything
| but Stadia until recently (it still doesn't work with ATV for
| some reason)
|
| - Google's own devices didn't support Stadia for absolutely
| no reason (you could have side-loaded the app, and it
| worked just fine)
|
| xCloud always felt much better than Stadia when playing.
| nothercastle wrote:
| GeForce Now works great when it works, but they are under-
| provisioned in capacity, so there are times when their data
| centers perform really poorly.
|
| The biggest issue is probably long load times. A suspend-to-
| drive feature would be a big boost.
|
| Then it's probably the oversold capacity.
|
| Then it would be that only about 80% of titles are compatible.
| maxglute wrote:
| More mobile desktop mode plz.
| Salgat wrote:
| 8K-16K is the breakpoint where pixel density is high enough to
| no longer need anti-aliasing, so we have a way to go, and even
| once we hit that, we still have a long way to go in rendering
| quality. Screenshots can look really nice, but dynamic visuals
| are still way behind the realistic graphics you'd expect to see
| in movies.
| hbn wrote:
| Don't worry, no matter how good hardware gets, you can rest
| assured bad software will push it to the limits and further
| push the minimum system requirements to run anything as basic
| as a note taking app or music player.
|
| https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law
| banish-m4 wrote:
| Sigh. The problem with fast hardware is it enables ever worse
| software development techniques. Perhaps we need 80286-level
| hardware performance levels again and then unlimited
| performance for 6 months once every decade.
| taskforcegemini wrote:
| but this is still far from good enough for VR
| refriedbeans wrote:
| Even today, the 4090 with path tracing cannot achieve 1440p at
| 120 fps in Cyberpunk, and the 1% lows are not great. Also,
| high-resolution VR will require much more performance (unless
| we manage to get foveated rendering working effectively).
| stefan_ wrote:
| The same as always, we crank up the resolution? Sorry to say
| but your 1080p and 1440p is already a few years behind, so "the
| next few years" aren't terribly exciting.
| hyperthesis wrote:
| There's still real-time ray-tracing.
|
| Cosmetic physics can eat up more, but as a gameplay mechanic,
| user interaction with realistic physics is too unpredictable
| (beyond a gimmick). Of course, we could then get actual
| computer sports.
| kllrnohj wrote:
| > Qualcomm's OpenCL runtime would unpredictably crash if a kernel
| ran for too long. Crash probability goes down if kernel runtimes
| stay well below a second. That's why some of the graphs above are
| noisy. I wonder if Qualcomm's Adreno 6xx command processor
| changes had something to do with it. They added a low priority
| compute queue, but I'm guessing OpenCL stuff doesn't run as "low
| priority" because the screen will freeze if a kernel does manage
| to run for a while.
|
| Very few (if any) mobile-class GPUs actually support true
| preemption. Rather, they are more like pseudo-cooperative, with
| suspend checks in between work units on the GPU. Desktop GPUs
| only got instruction-level preemption not _that_ long ago -
| Nvidia first added it with Pascal (GTX 10xx) - so mobile still
| lacking this isn't surprising. It's a big cost to pay for a
| relatively niche problem.
|
| So the "crash" was probably a watchdog firing for failing to make
| forward progress at a sufficient rate and also why the screen
| would freeze. The smallest work unit was "too big" and so it
| never would yield to other tasks.
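| A minimal sketch of the usual workaround (assuming pyopencl and
| any available OpenCL device): split one long-running launch into
| many short dispatches so each enqueue finishes well under the
| watchdog window and other work can be scheduled in between.
|
|     import numpy as np
|     import pyopencl as cl
|
|     ctx = cl.create_some_context()
|     queue = cl.CommandQueue(ctx)
|     prg = cl.Program(ctx, """
|     __kernel void scale(__global float *data, const uint offset) {
|         uint i = offset + get_global_id(0);
|         data[i] *= 2.0f;
|     }
|     """).build()
|
|     n, chunk = 1 << 24, 1 << 20   # tune chunk so each dispatch runs << 1 s
|     host = np.ones(n, dtype=np.float32)
|     mf = cl.mem_flags
|     buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=host)
|
|     for offset in range(0, n, chunk):
|         size = min(chunk, n - offset)
|         # Each enqueue covers only `size` work-items, giving the driver
|         # a boundary where other work (e.g. compositing) can get in.
|         prg.scale(queue, (size,), None, buf, np.uint32(offset))
|     queue.finish()
|     cl.enqueue_copy(queue, host, buf)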
| kimixa wrote:
| > Desktop GPUs only got instruction-level preemption not that
| long ago
|
| And even then it's often limited to already-scheduled shaders
| in the queue - things like the register files being statically
| allocated at task-schedule time mean you can't just "add" a
| task, and removing a task is expensive: you need to suspend
| everything, store off the (often pretty large) register state
| and any used shader-local data (or similar), stop that task and
| deallocate the shared resources. It's avoided for good reason,
| and even where it's supported it's likely a rather untested,
| buggy path.
|
| If you run an infinite loop on even the latest Nvidia GPU (with
| enough instances to saturate the hardware) you can still get
| "hangs", as it ends up blocking things like composition until
| the driver kills the task. It's still nowhere near the
| experience CPU task preemption gives you.
| darksaints wrote:
| I've been wanting to do an Ask HN for this, but seeing as the
| Snapdragon 855 is directly related to my question, I'm hoping
| someone here has some insights that could point me in the right
| direction. I'm currently developing a specialized UAV, and I've
| gotten to the point where I've got MCU-based FC hardware almost
| running that is based on open-source Pixhawk designs. But the
| long term plan is to introduce a variety of tasks that rely on
| computer vision and/or machine learning, and so the general
| approach is to go with a companion computer. It seems the default
| choice for things like this in the prototype stage is a Raspberry
| Pi or Jetson module, but I would really like to develop the
| entire system as a single module with an eye toward serial
| production, which means I'm looking at embedding an SOC, as
| opposed to wiring up an external module. And that has led me to
| look at the Snapdragon SOCs, in large part due to the GPU
| capabilities. But now I'm getting into unfamiliar territory,
| because I can't do ordering/research via digikey like I have done
| with everything else up until now.
|
| When we're talking about new development, what are the
| expectations that I should have for direct procurement of higher
| end components (this goes for Sony/Samsung camera sensors too)?
| Are there typically very large minimum purchase quantities? Do I
| have to already have a large production output before I can even
| talk to them about it? Is it possible to get datasheets even if
| I'm not at that stage yet?
|
| I get the feeling I'm stuck with the Jetson + Off-the-shelf
| camera approach until I can demonstrate the ability to mass
| produce and sell my existing designs, but it would be nice if I
| could find out more about how that is supposed to play out.
| echoangle wrote:
| I would stick to mounting SoMs on your PCB until you have a
| clear need for something else; integrating an SoC onto a PCB
| isn't trivial, and you need to take a lot of care with signal
| paths and RF properties.
| user_7832 wrote:
| Disclaimer, I'm not an expert in this field.
|
| If you're hoping to commercialize and are a 1-man company, I
| hope you have a good USP/feature that's beyond "it's cheaper
| than the competition". If you do have that, I'd say just get an
| MVP out while keeping systems as platform agnostic as you can.
| Doesn't matter if you need an extra Jetson, if your product is
| good enough the extra cost shouldn't really deter customers.
|
| Once you've started, hire/contract folks and streamline it out.
|
| BTW, dev kits do exist for the Snapdragon 8 Gen 2/3 I think, but
| they're about $700 last I checked. If you're handy you could
| even try to reuse a rooted phone, if that's feasible for you.
___________________________________________________________________
(page generated 2024-05-02 23:00 UTC)