[HN Gopher] AMD openSIL open source firmware proof of concept
___________________________________________________________________
AMD openSIL open source firmware proof of concept
Author : wmf
Score : 98 points
Date : 2023-06-14 19:37 UTC (3 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| gary_0 wrote:
| This sounds relevant to what the Oxide Computer guys are doing
| (the Cantrill Crew, the Keepers of the Solaris Flame, etc).
| They're building AMD EPYC servers open-source style, and it
| sounded like they were making some headway in getting AMD to
| allow alternatives to the "proprietary vendor blobs" way of doing
| hardware.
| JonChesterfield wrote:
| Sounds great! I don't know what this is, but there's a linked page
| at https://community.amd.com/t5/business/empowering-the-
| industr... which looks like a reasonable place to find out. Edit:
| I still don't know what this is. Could someone who does leave a
| comment to enlighten the rest of us?
| wmf wrote:
| This is code to initialize an EPYC server processor. If you
| combine openSIL and EDK2, you can create a completely open-source
| "BIOS" for a server.
| seanw444 wrote:
| Another solid move by AMD. Due to their attitude towards open
| source in the GPU department, I haven't used an Nvidia graphics
| card in years. Their APUs kill pretty much all the competition.
| Their CPUs are excellent for price/performance, efficiency, and
| core count. And now this? If they can prioritize getting more
| AI/ML and 3D modeling capability into their cards, going all-red
| would become the clear new meta.
| alanfranz wrote:
| If.
|
| If they can prioritize _and_ execute on that vision.
| slt2021 wrote:
| There is also a second-mover advantage: they just need to copy
| the best parts of Nvidia's ML stack and not waste time figuring
| out what works and what doesn't.
|
| They will get users simply because they will be an alternative
| to Nvidia. Add a good pricing strategy and it could be an
| immediate success.
| msla wrote:
| > There is also a second-mover advantage: they just need to
| > copy the best parts of Nvidia's ML stack and not waste time
| > figuring out what works and what doesn't.
|
| Seems like patents would stop that.
| bmicraft wrote:
| Didn't the Google v Oracle lawsuit end up confirming that
| you can't patent an API?
| slt2021 wrote:
| AMD doesn't have to implement the CUDA API; they just need to
| make sure their compute framework works well with
| pytorch/tf/MLIR or whatever high-level framework is being
| used.
|
| CUDA itself will change over time, so there's no reason for
| AMD to pick CUDA, because nobody writes CUDA kernels by hand;
| they use high-level frameworks.
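|
| (For a concrete illustration of that point: PyTorch's existing
| ROCm builds already expose AMD GPUs through the usual
| torch.cuda device API, so typical user code never names the
| vendor. A minimal sketch, assuming only a stock PyTorch
| install:)
|
|     import torch
|     import torch.nn as nn
|
|     # Identical user code on a CUDA (Nvidia) build or a ROCm
|     # (AMD) build: ROCm builds of PyTorch expose AMD GPUs via
|     # the torch.cuda namespace.
|     use_gpu = torch.cuda.is_available()
|     device = torch.device("cuda" if use_gpu else "cpu")
|
|     model = nn.Linear(128, 10).to(device)
|     x = torch.randn(32, 128, device=device)
|     loss = model(x).sum()
|     loss.backward()  # dispatches to whichever backend is installed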
| slt2021 wrote:
| I didn't know Nvidia patented matmul and the dot product.
| m00x wrote:
| There's a lot more involved than matmul and dot products.
| fatfingerd wrote:
| I'm actually happier if they deliver mediocre GPU
| performance seamlessly. I would be as annoyed as the gamers
| to be priced out of a great desktop/video setup. If the hype
| more fully occupied Nvidia's fragile race cars, I'd have a
| nicer laptop.
| justinjlynn wrote:
| With Ryzen, their other recent products, and their rapid
| pace of improvement, they certainly have developed a decent
| track record of that.
| m00x wrote:
| They're crushing it on hardware, but falling behind on
| firmware and software.
| justinjlynn wrote:
| Getting proprietary Intel advantages into widely used open-
| source packages and systems is something Intel has been
| very, very good at. I use POWER9 systems and... yeah, the
| software porting situation has been rather annoying
| (especially in media processing libraries that rely on
| things like Embree and OpenColorIO).
| gdevenyi wrote:
| One word.
|
| CUDA
| justinjlynn wrote:
| Valve's Proton/Wine... Apple's M-series CPUs. Software/hardware
| layers, especially system software and hardware interfaces
| that must remain stable and compatible essentially forever,
| aren't the strong lock-in mechanisms everyone thinks they
| are.
| m00x wrote:
| That absolutely doesn't work in ML training.
| jjoonathan wrote:
| It absolutely does, but AMD's execution on this front has
| been unmitigated dogshit for the last decade and now
| every engineer in this niche has a scar or five from
| giving AMD chance after chance after chance and then
| limping back to NVDA when the slog is just too much.
|
| Now that they have money, I'm hoping they can turn this
| around. I hear that they have... but I've always heard
| that, and it has never been true. I need to see it to
| believe it now.
| Pet_Ant wrote:
| Are you referring to OpenCL?
| wongarsu wrote:
| ML training sounds like the easiest part. All that would be
| needed on that front is AMD engineers writing good AMD
| backends for PyTorch and TensorFlow, a much simpler task
| than offering an optimized general-purpose interface and
| getting people to use it (whether that's optimized OpenCL,
| a CUDA-compatible API, or something else).
|
| Of course, so far AMD has done a laughably bad job at
| these kinds of things. I guess there's hope.
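|
| (Continuing the sketch from the earlier comment, and again
| assuming only a stock PyTorch install: the only user-visible
| difference between an Nvidia build and a ROCm build is which
| version field is populated; the model code itself doesn't
| change.)
|
|     import torch
|
|     # On a CUDA (Nvidia) build torch.version.cuda is a string
|     # and torch.version.hip is None; on a ROCm (AMD) build it
|     # is the other way around. User code stays identical.
|     print("cuda build:", torch.version.cuda)
|     print("rocm build:", torch.version.hip)
|     print("gpu available:", torch.cuda.is_available())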
___________________________________________________________________
(page generated 2023-06-14 23:00 UTC)