[HN Gopher] JEDEC publishes GDDR7 graphics memory standard
       ___________________________________________________________________
        
       JEDEC publishes GDDR7 graphics memory standard
        
       Author : ksec
       Score  : 74 points
       Date   : 2024-03-05 18:57 UTC (4 hours ago)
        
 (HTM) web link (www.jedec.org)
 (TXT) w3m dump (www.jedec.org)
        
       | paddy_m wrote:
       | How much relevance does JEDEC still have?
       | 
        | I would think NVIDIA in particular, and other chip
        | makers/integrators like Apple, make up their own standards now.
        | It also seems less relevant because memory is rarely
        | interchangeable anymore.
        
         | sliken wrote:
          | Do you have any evidence that Nvidia or Apple are using non-
          | standard memory chips? Link? From what I can tell they are both
          | using standard chips, but with very wide memory interfaces.
          | Apple's lowest end is 128-bit, but they offer 256-, 512-, and
          | 1024-bit-wide memory interfaces for more bandwidth, which is
          | mostly a benefit for the iGPU in all of Apple's M-series CPUs.
          | This is part of why Apple is pretty good at LLMs, especially
          | those needing more RAM than fits in even the most expensive
          | GPUs.
          | 
          | Sad that the vast majority of x86-64 laptops and desktops have
          | the same bus width as decades ago, while core counts keep
          | increasing.
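          | 
          | A rough sketch of the arithmetic (assuming LPDDR5-6400 at
          | every width; actual parts and speeds vary):
          | 
          |     # peak bandwidth = bus width (bits) * transfer rate / 8
          |     rate_mts = 6400                        # LPDDR5-6400, MT/s
          |     for width in (128, 256, 512, 1024):    # bus width in bits
          |         gb_s = width * rate_mts / 8 / 1000 # MB/s -> GB/s
          |         print(f"{width:4}-bit -> {gb_s:6.1f} GB/s")
          |     # 128-bit -> 102.4 GB/s ... 1024-bit -> 819.2 GB/s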
        
           | jsheard wrote:
           | Nvidia and Micron came up with GDDR6X, which isn't a JEDEC
           | standard. JEDEC did standardize GDDR5X before that, but only
           | Micron ever made it and only Nvidia ever used it AFAIK.
        
           | zeusk wrote:
            | JEDEC never "standardized" the GDDR6X that Nvidia uses;
            | Micron and Nvidia worked closely on both GDDR6X and GDDR5X.
        
         | hedgehog wrote:
         | Apple and NVIDIA are both members of JEDEC...
        
         | monocasa wrote:
         | HBM is a JEDEC spec these days. Apple's on package memory is
         | still LPDDR.
        
         | ajross wrote:
         | JEDEC isn't a memory technology monopoly or anything, they're
         | just a standards organization. You have a situation where lots
         | of companies need to make products that interoperate, but
         | interoperation is complicated in electrical engineering. There
         | are a *lot* of ways to get a memory interconnect wrong.
         | 
         | So the solution is to pick (or create) a separate, notionally
         | independent body staffed and supported by representatives of
         | all the relevant stakeholders, and have them write "standards"
         | that everyone agrees to adhere to. The body doesn't invent the
         | technology, that happens at the individual chip companies. They
         | then present their proposals to JEDEC[1] and everyone argues
         | and agrees on what will go into GDDR19 or whatnot. And JEDEC
         | then publishes the standards for all to see.
         | 
          | [1] Or whoever; JEDEC does DRAM, but there's the USB
          | Implementers Forum, the Bluetooth SIG, WiFi is under the IEEE,
          | etc...
        
         | pillusmany wrote:
          | Standardized memory chips allow economies of scale to work.
        
         | Dalewyn wrote:
         | >How much relevance does JEDEC still have?
         | 
         | Ah, a FOSSy dev insistent that his bits are _gibi-_ and _mebi-_
         | and _kibi-_ bytes.
        
       | sliken wrote:
        | Doubling the channels from GDDR6 sounds good. The speed of light
        | isn't changing, so at least we can handle more parallelism at
        | the same latency.
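        | 
        | Back-of-envelope with Little's law (outstanding requests =
        | throughput x latency); the per-channel figure is illustrative:
        | 
        |     LATENCY_NS = 100             # fixed by physics, unchanged
        |     CHAN_GB_S = 32               # per-channel bandwidth, made up
        |     for channels in (2, 4):      # GDDR6-style vs GDDR7-style
        |         bw = channels * CHAN_GB_S              # GB/s
        |         in_flight = bw * LATENCY_NS / 64       # 64 B accesses
        |         print(channels, "ch:", bw, "GB/s,", in_flight, "reqs")
        |     # same latency either way, twice the requests in flight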
        
         | oorza wrote:
          | The ratio of light speed to the size of the universe is so
         | stupidly small that I'm convinced our simulation is determining
         | how low the speed of light can get before interstellar travel
         | is outright impossible.
        
           | adtac wrote:
           | With sufficiently advanced technology, travelling between
           | stars will be more of a transfer of your consciousness using
           | interstellar WiFi (remember to set TCP_NODELAY!) rather than
           | transporting slow and heavy atoms. You just get transferred
           | from one biological substrate to another. None of the time
           | dilation, all of the space exploration.
        
             | Phelinofist wrote:
             | But that would still be limited by the speed of light,
             | right?
        
               | adtac wrote:
                | Of course, everything is. Doubling the speed of light
                | means your network packets get there twice as fast. But
                | accelerating matter to relativistic speeds, which is also
                | limited by the speed of light, gains less marginal
                | utility from that doubling once you count the energy
                | needed for acceleration/deceleration and time dilation.
        
               | stronglikedan wrote:
               | Maybe quantum entanglement, where the original "portals"
               | would be set up around the universe at the speed of
               | light, but then data could henceforth be transferred
               | between the portals at the speed of entanglement.
        
               | omneity wrote:
               | You're thinking of latency vs bandwidth/throughput. You
               | might not improve on the latency part (speed of light),
               | but you can increase the bandwidth (amount of data
               | transferred per unit of time) just like a highway with
               | more lanes can carry more people without increasing the
               | individual speed of cars. You might even _decrease_ car
               | speed and still get an improved throughput overall.
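                | 
                | A toy formula to keep the two quantities
                | separate (all numbers would be illustrative):
                | 
                |     def transfer_time(nbytes, latency_s, bw):
                |         # bw in bytes/s; the latency term is
                |         # fixed by distance, only the payload
                |         # term shrinks with more "lanes"
                |         return latency_s + nbytes / bw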
        
             | parl_match wrote:
             | Yeah, but you still need to get some bodies out there.
        
             | WJW wrote:
             | > all of the space exploration.
             | 
             | How does the receiving technology get built? Surely at
             | least someone will have to go there the first time, and
             | they will have to take the long way. It will still be quite
             | a problem to get to a system 10k light years away.
        
               | kraquepype wrote:
               | My inner sci-fi geek tells me that by this time, we
                | discover faster-than-light travel, only it isn't
                | compatible with life as we know it.
               | 
               | So we ship off these receivers to circumvent that
               | limitation. Instead of travelling ourselves, we can send
               | off our consciousness to inhabit a human-life analog to
               | explore.
               | 
               | What that does to your psyche, and your body in limbo,
               | are probably good material for a story, if it hasn't
               | already been written.
        
               | cyanydeez wrote:
               | My inner geek tells me it's more likely humans will plug
               | themselves into the matrix because it'll be far more
               | receptive to technological advances than actual
               | exploration.
               | 
               | At best, you'll throw a bunch of nanoprobes everywhere to
               | get new entropy into the system.
        
               | cstrahan wrote:
               | You just have to first hack (or maybe even just ask
               | nicely) another suitable species (or their technological
               | artifacts) wherever you want to go, and have them create
               | the biological substrate and download/upload mechanism on
               | their end. This limits travel to already inhabited
               | corners of the universe, but that's better than nothing I
               | suppose.
               | 
               | The tricky thing is that hacking is usually an iterative
               | process, and these iterations are going to be an extreme
               | exercise in patience.
               | 
               | Actually, another tricky thing: how do you know that the
               | other end is actually cooperating? If the aliens are
               | dicks they could give you the thumbs up while having zero
               | intention to reconstitute your consciousness. If you
               | wanted to round-trip some brave soul as a means of
               | verifying everything works, they could just send one of
               | their own minds back instead, just for the fun of
               | wreaking havoc.
        
               | riskable wrote:
               | > The tricky thing is that hacking is usually an
               | iterative process, and these iterations are going to be
               | an extreme exercise in patience.
               | 
               | No kidding! On the first try you accidentally end up
               | causing a revolution because the targets/specimens ended
               | up learning about the scientific method, gunpowder, and
               | other dangerous things instead of just getting a proper
                | advanced consciousness installed. So now all you can do
                | is try to shape said species' technological progress
                | towards building the correct technology that you can
                | hijack for your own purposes when ready.
               | 
               | "Just be patient"
        
               | adtac wrote:
               | Is YC accepting applications for interstellar body rental
               | stations like Hertz is for cars? I'd bootstrap it, but I
               | think this requires venture scale funding.
        
               | colejohnson66 wrote:
               | Well, Hertz is selling off their whole Extraterrestrial
               | Vehicle (EV) fleet, so it's probably not profitable
               | enough for VCs.
        
             | TaylorAlexander wrote:
             | I'm of the view that you can possibly duplicate
             | consciousness but you can never send "me". I'm stuck on the
             | consciousness I've got. If you tried to upload my
             | consciousness somewhere I'd still be sitting here like "hey
             | look there's another one of me", but I'd not experience
             | some shift in perspective myself.
        
               | foobarian wrote:
                | I find this a scary topic, like touching a hot stove. Try
                | as I might, I can't figure out (and overall nobody has so
                | far) how the "self" experience works.
        
               | adtac wrote:
               | No, you _are_ your consciousness. The self exists only in
               | the story the mind tells itself, so both versions would
               | think they are the original you.
               | 
               | Besides, the serialisation process is a form of quantum
               | measurement. Depending on how coarse-grained it is, there
               | might be no way to take a snapshot without modifying you
               | (maybe the measurement process turns the original brain
               | matter into soup).
        
               | pricecomstock wrote:
               | They would think they are the original you, but I think
               | the GP was saying that the original perspective would
               | continue on the original consciousness-
               | continuity/body/hardware.
               | 
               | Cloning a hard drive can produce the same data, but
               | without any networking, there's no reason for the
               | original machine to know anything from the perspective of
                | the new one.
        
             | joshspankit wrote:
             | Who says we're not doing that already and just calling it
             | dreams?
        
             | bitwize wrote:
              | In elementary school I read a kids' novel called _My Trip
              | to Alpha I_, which had precisely this as a MacGuffin. The
              | main character travels to visit his aunt and uncle by
              | "Voya-Code", in which his consciousness is transmitted to
              | an android body on the destination planet.
        
             | drtgh wrote:
             | >will be more of a transfer of your consciousness
             | 
              | Sounds like a copy, not a transfer. If you didn't
              | physically transport the atoms, the matter, you would end
              | up with two duplicates living in different places and
              | times, and with different ways of thinking after the copy,
              | as their lived experiences diverge from that moment.
              | 
              | That is, unless you exterminate the original with each
              | copy. It should also be considered that each copy may lose
              | information or degrade (signal integrity over distance,
              | number of trips, and so on).
        
           | hnuser123456 wrote:
            | Or is it the ratio of your lifespan to the age of the
            | universe? The radius of the observable universe is only
            | about 3x the distance light could have traversed in its age.
            | The ratio of the age of the universe to your expected
            | lifespan is about 8 orders of magnitude.
        
           | pillusmany wrote:
           | Our solar system deployed a black domain for deterrence.
        
           | ko27 wrote:
           | "Ratio of light speed to the area of the universe" does not
           | determine how far _you_ can travel in a set amount of time,
           | because time dilation exists.
        
         | znpy wrote:
         | Just checking my intuition: could we still get a speedup via
         | pipelined execution and branch prediction?
         | 
          | From what I see reading
          | https://en.wikipedia.org/wiki/Multi-channel_memory_architect...
          | different channels could, in theory, be used "autonomously of
          | each other".
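          | 
          | A toy model of what "autonomous" channels buy you (real
          | controllers hash more address bits than this):
          | 
          |     CHANNELS = 4
          |     LINE = 64                      # bytes per cache line
          |     def channel_of(addr):
          |         # consecutive lines land on different channels,
          |         # so independent accesses proceed in parallel
          |         return (addr // LINE) % CHANNELS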
        
           | Lramseyer wrote:
           | Controllers kind of do that. At the end of the day, it's what
           | makes designing a memory controller so difficult (and I'm not
            | even talking about the PHY; those things are straight up
            | cursed!) We see these eye-popping numbers for maximum
           | potential bandwidths, but the reality is a bit more
           | complicated. There's a lot that goes on behind the scenes
           | with opening and closing memory banks, refreshes, and general
           | read and write latencies. Unoptimized prediction algorithms
           | (as they are programmable) can result in losing _half_ of
           | your performance.
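            | 
            | A toy cost model of why scheduling matters (timings
            | illustrative, not from any datasheet):
            | 
            |     T_CAS = T_RCD = T_RP = 15          # ns each
            |     open_row = None
            |     def access_ns(row):
            |         global open_row
            |         if row == open_row:            # row-buffer hit
            |             return T_CAS
            |         open_row = row                 # miss: precharge,
            |         return T_RP + T_RCD + T_CAS    # activate, read
            |     # a scheduler that keeps closing hot rows roughly
            |     # triples the latency of every access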
        
         | colechristensen wrote:
         | >the speed of light isn't changing
         | 
          | Oh, but the speed of the signal does depend quite a lot on the
          | transmission medium. In Cat-6, signals travel at about 2/3 _c_.
          | Can't find a quick reference for on-die or motherboard kinds of
          | interconnects. If you had optical interconnects traveling
          | through vacuum in a silicon chip, that's a full 50% faster (as
          | in lower travel time for one bit over a distance) than most
          | copper Ethernet.
         | 
         | https://en.wikipedia.org/wiki/Velocity_factor
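          | 
          | Quick numbers: one-way time over 10 cm at a few velocity
          | factors (the FR-4 figure is a rough typical value):
          | 
          |     C = 3e8                              # m/s
          |     for medium, vf in [("vacuum", 1.0),
          |                        ("Cat-6", 0.66),
          |                        ("FR-4 trace", 0.5)]:
          |         t_ns = 0.10 / (C * vf) * 1e9     # 10 cm, one way
          |         print(f"{medium:10} {t_ns:.2f} ns")
          |     # vacuum ~0.33 ns vs FR-4 ~0.67 ns for the same 10 cm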
        
       | Scene_Cast2 wrote:
        | One interesting thing to note is that all the high-speed
        | interconnect standards (GDDR, PCIe, USB, Ethernet) are moving (or
        | have already moved) to RF-style non-binary signaling (as opposed
        | to bumping up clock speeds or increasing pin count). I wonder
        | what the next steps will be for interconnects: full-blown
        | baseband-style transceivers with QAM, perhaps?
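        | 
        | For scale, bits per symbol is just log2 of the level count
        | (PAM4 is what GDDR6X uses; GDDR7 moves to PAM3):
        | 
        |     import math
        |     for scheme, levels in [("NRZ (binary)", 2),
        |                            ("PAM3 (GDDR7)", 3),
        |                            ("PAM4 (GDDR6X)", 4),
        |                            ("16-QAM", 16)]:
        |         bits = math.log2(levels)
        |         print(f"{scheme:14} {bits:.2f} bits/symbol")
        |     # PAM3 is ~1.58 bits/symbol; GDDR7 packs 3 bits into
        |     # 2 symbols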
        
         | wmf wrote:
          | Optical communication already uses QAM, so that's probably the
          | next step.
        
         | klysm wrote:
          | Sharp signal edges are hard to get at higher frequencies, so
          | this seems quite natural.
        
         | SmellTheGlove wrote:
         | This is probably a stupid question but I don't claim any
         | knowledge here. Even though interconnect standards are moving
         | to non-binary communication, doesn't the signal eventually need
         | to be converted back to a bitstream at its destination? Does
         | this just push the bottleneck around, or do I just not
         | understand the problem being solved? It's almost certainly the
         | latter and I'd love to understand more.
        
           | kevvok wrote:
           | You're right that the signals have to be converted back into
           | bits at the destination. Basically, this solves the problem
           | of pumping those bits at high speed across traces on a
           | circuit board vs within a chip. The longer a signal has to
           | go, the harder it is to maintain its integrity.
        
           | wmf wrote:
           | A serializer/deserializer (serdes) is used to convert between
           | high-speed serial I/O outside the chip (e.g. 100 Gbps) and
           | lower-clocked parallel signals inside the chip (e.g. 64 bits
            | at ~1.56 GHz). Using serial protocols reduces the cost and
           | thickness of cables while parallel wires are cheaper inside
           | the chip.
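            | 
            | The width-for-clock trade in two lines, using the
            | numbers above:
            | 
            |     SERIAL_GBPS, BUS_BITS = 100, 64
            |     clk_ghz = SERIAL_GBPS / BUS_BITS   # ~1.56 GHz on-chip
            |     print(f"{BUS_BITS} bits @ {clk_ghz:.2f} GHz "
            |           f"= {SERIAL_GBPS} Gbps serial")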
        
           | ak217 wrote:
            | Not a stupid question. You can think of the problem by
            | analogy with RF engineering. You have very high-performance
           | digital logic and precise clocks on the chip that you can use
           | to encode/decode (convolve/deconvolve) bits into waveform
           | signals and time those signals before they leave the chip at
           | minimal latency/power expense. Once the bits are off the
           | chip, you have no such resource and are dealing with all
           | kinds of impedance and noise issues, which is why there are
           | separate circuits/logic dedicated to training and calibration
           | of the encoding parameters of the signals sent over the wire
           | in DRAM chips.
           | 
           | This more complex encoding scheme is just the next level in
           | that process, indeed moving it closer to techniques used in
           | RF engineering.
        
       | imtringued wrote:
        | Unless Nvidia can somehow massively increase memory capacity, it
        | is looking bleak for them in the consumer AI inference space.
        | From the left, Apple has a fully integrated SoC with insane
        | memory bandwidth and capacity; from the right, AMD is tackling
        | the FLOPS advantage using the XDNA AI Engines they got from the
        | Xilinx acquisition, and they are going to open-source the
        | compiler for those AI Engines. The only competitive advantage
        | Nvidia has left is its high memory bandwidth, but even that is
        | being threatened by Strix Point, so they will need to adopt
        | GDDR7 with 32 GB to 64 GB of VRAM fast or they will become
        | irrelevant except for training. Oh, and by the way, AMD GPUs
        | will stay completely irrelevant for AI, which explains why they
        | didn't want to waste much time on ROCm for consumer GPUs. Nobody
        | is going to buy those for AI anymore by the end of the year.
        
         | rubatuga wrote:
         | AMD was the first to introduce consumer HBM cards.
        
         | IshKebab wrote:
          | I mean... CUDA. Nvidia is fine.
        
         | wmf wrote:
         | It's not clear that the "consumer (local) AI inference space"
         | is a real market. Ultimately Nvidia has access to all the same
          | technologies as their competitors, plus better software, so
          | anything they can do, Nvidia can do better... if they want to.
        
       ___________________________________________________________________
       (page generated 2024-03-05 23:00 UTC)