[HN Gopher] Nvidia's hot adapter for the GeForce RTX 4090 with a...
       ___________________________________________________________________
        
       Nvidia's hot adapter for the GeForce RTX 4090 with a built-in
       breaking point
        
       Author : elorant
       Score  : 220 points
       Date   : 2022-10-28 16:34 UTC (6 hours ago)
        
 (HTM) web link (www.igorslab.de)
 (TXT) w3m dump (www.igorslab.de)
        
       | badkitty99 wrote:
        
       | TheRealPomax wrote:
        | I'm not sure people have been yelling that the spec or form
        | factor are the problem; they've been yelling that NVIDIA is
        | the problem, specifically for how they implemented and supply
        | a 12V solution (both the physical products they ship and the
        | messaging they've put out around it), which this article yet
        | again underlines as being both real and even worse than
        | initially thought.
       | 
        | No need to beat about the bush: the "certain celebrities" are
        | folks like JayzTwoCents, who are willing to destroy their
        | relationship with NVIDIA over this by showing that a company
        | intentionally sells products they know are fire hazards, lied
        | about that to the public, doubled down on the lies as more
        | people started reporting on it, and kept doing so all the way
        | up to when official paperwork showed they knew about the
        | issue and went ahead with it all along. The initial reports
        | were "this is dangerous", but escalated to "NVIDIA knew this
        | could catch fire and pushed for sales anyway, this cannot be
        | okay".
       | 
       | If this was a cable that just bricked cards, or even entire
       | computers, that's a dick move but it's just a computer. No one
       | died, just some hardware got destroyed. However, that's not what
       | it does: it can literally start fires and people can die, and
       | there is publicly available documentation that shows that NVIDIA
       | is strictly liable (in the legal sense) for gross negligence
       | (again in the legal sense). NVIDIA should be all kinds of sued
       | and federally fined here.
        
         | Sakos wrote:
          | The last time I commented on a previous story about this
          | debacle, people pointed out that Molex connectors melt all
          | the time and that this wasn't a big deal. I saw that they
          | were right that Molex connectors do melt sometimes, but
          | these adapters have been on the market for what, a week? It
          | felt like there was something wrong with the quality for it
          | to happen so often with specifically these parts.
        
           | hn8305823 wrote:
            | Molex connectors do indeed melt but rarely catch on fire.
            | They're made of nylon, which melts at as low as 170°C.
           | 
           | The pictures of the internals of this adapter are absolutely
           | horrifying. This is 737-MAX style gross negligence: Several
           | consecutive worst practices/shortcuts that ensure
           | catastrophic failure one way or another.
        
           | Sohcahtoa82 wrote:
           | > people pointed out that molex melt all the time and that
           | this wasn't a big deal.
           | 
           | They're insane.
           | 
           | I've been building my own PCs for over 20 years. I've NEVER
           | had a connector melt.
           | 
           | I've never HEARD of anyone having a connector melt outside of
           | maybe a post to /r/WellThatSucks of a single melted
           | connector.
        
             | paulmd wrote:
             | molex-to-SATA adapters melt all the time because the
             | "molded" SATA connectors use thermoplastic that starts to
             | soften well before it actually melts... and those plastic
             | temperatures are low enough that you hit them in an average
             | case. They are unsafe even for HDD usage let alone the
             | (insane) people who use them for GPU mining rigs. Just get
             | some custom 6-pin strings made, people, it's cheaper than
             | burning your farm down lol
             | 
             | that said it's the SATA side that's the problem, not molex.
             | 
             | molex does have a reputation for scorching and arcing at
             | higher current levels though. So does the Tamiya connector
             | that is super common in the RC car world, terrible
             | connector.
        
           | formerly_proven wrote:
           | Melty Molices mostly affected the old 5 1/4" HDD connector
           | and that was only really a problem because of extremely poor
           | tolerances in third-party connectors leading to bent-open
           | sockets. Mate-n-lok, which is the same type of connector
           | (available in a gazillion variations), is used in huge
           | numbers to this day and has zero reliability issues because
           | the connector design is unproblematic if done correctly.
           | 
           | What's sorta new here is that these are genuine connectors -
           | not aftermarket dreck - and they're already having issues at,
           | as I understand it, around 70-80 % of the specced load
           | (specified for >600 W, while these GPUs are normally limited
           | to 450 W or so).
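As a sanity check on the figures above, the load fraction and per-pin current work out as follows. This assumes the six current-carrying pin pairs of a 12VHPWR connector and perfectly even current sharing, which is exactly what a degraded contact breaks:

```python
# Back-of-the-envelope check on the comment's figures (600 W spec,
# ~450 W typical card limit); the 6-power-pin layout is 12VHPWR's.
SPEC_WATTS = 600      # connector rated load
CARD_WATTS = 450      # typical RTX 4090 power limit
VOLTS = 12
POWER_PINS = 6        # 12V/GND pin pairs in a 12VHPWR connector

load_fraction = CARD_WATTS / SPEC_WATTS   # 0.75
amps_total = CARD_WATTS / VOLTS           # 37.5 A
amps_per_pin = amps_total / POWER_PINS    # 6.25 A, if shared evenly

print(f"{load_fraction:.0%}, {amps_total} A total, {amps_per_pin} A/pin")
```

So even at "only" 75% of the rated load, each small pin is carrying a current that leaves little margin if one contact degrades and the others have to pick up its share.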
        
           | rrss wrote:
           | How often does it happen? I cannot tell from the article.
        
             | wnevets wrote:
             | Too early to tell. Not only are the cards with these
             | adapters just now getting into the hands of consumers but
             | some people didn't realize there was melting taking place
             | until they unplugged the adapter.
        
             | TheRealPomax wrote:
             | Fun fact: it doesn't even need to happen once for a company
             | to be guilty of gross negligence when there's a public
             | paper trail that shows they knew about the problem, then
             | went ahead with selling products with that problem.
             | 
              | But: it's happened more than once now, which is exactly
              | the bar required to count as evidence of a pattern of
              | real-world damages in court.
        
               | [deleted]
        
         | UI_at_80x24 wrote:
         | I wish I was a fly on the wall when the conversation at EVGA
         | went down. The timing is a little too perfect.
        
           | [deleted]
        
         | rhn_mk1 wrote:
          | Having heard "fire hazard" from an nvidia fan as a
          | justification for their user-hostile signed firmware (it
          | blocks free drivers from using the cards at full speed), I
          | now think we can put that excuse among the myths.
          | 
          | I see no good will in blocking free Linux drivers, just the
          | same crap that earned Torvalds' salute.
          | 
          | That they've now opened a kernel driver is a small
          | improvement if the firmware remains signed.
        
         | aftbit wrote:
         | I was with you until: "and fined for half their annual profits
         | by the FCC". Not only is that not the FCC's job, but that's an
         | insane fine for a relatively small problem. If anything, NVIDIA
         | should be required to recall the adapters and replace them with
         | working ones at their cost, and maybe pre-fund a settlement
         | fund to pay out in cases of damage due to this connector.
        
           | sn0wf1re wrote:
           | If they were aware of the risks/problem then I do not think
           | it is unfair punishment. We as a society need to have a
           | stronger response to companies choosing profits over safety.
        
           | daniel-cussen wrote:
        
           | Brian_K_White wrote:
           | An honest mistake is different from an informed decision.
           | 
           | The consequences are not small, and the action was apparently
           | fully informed. Those two things combined constitute a very
           | Big Deal deserving no forgiveness at all.
        
           | anonymousab wrote:
           | Corporations aren't going to adjust their behaviour until the
           | punishments for their malfeasance start becoming a true
           | existential threat
        
             | xxpor wrote:
              | They spend _lots and lots_ of money to make sure this
              | doesn't happen. It's more or less why the entire right
              | wing legal movement (Federalist Society et al) exists.
        
               | saltminer wrote:
               | Sadly, this. "Tort reform" is the name of the game, and
               | it's depressing how well it's worked. Much like the more
               | recent efforts to rebrand estate taxes as a "death tax"
               | to bolster public support for the wealthy giving up less
               | of their inherited fortunes.
        
             | galangalalgol wrote:
              | Agreed, and this also seems like a valid case for
              | holding individuals accountable if it can be shown they
              | knowingly ignored or covered up the situation. Perhaps
              | criminally so if there were injuries.
        
           | MisterBastahrd wrote:
           | Sorry, but that's nowhere close to being enough.
           | 
           | This isn't an "insane fine" or a "relatively small problem."
           | 
           | These things are dangerous and they had to know that they
           | were dangerous when they shipped them. They deserve to get
           | financially nuked for being so ridiculously irresponsible.
           | Companies that put profits ahead of consumer safety shouldn't
           | be allowed to have profits.
        
             | karamanolev wrote:
             | There's "they should've known, but it seems they didn't",
             | which is bad engineering and should be punished somewhat.
             | 
             | On the other hand, this seems to be "they knew about it and
             | went ahead anyway" a la Boeing 737 MAX, which is many, many
             | levels above and it should be the equivalent to a financial
             | nuke, IMHO. Recoverable, but something that is talked about
             | in the company for the next 15 years.
             | 
                | Dare I say, if there's provably a person who was
                | presented with this as a problem and overrode the
                | decision so as not to delay the launch, there should
                | be criminal liability for them?
        
               | MisterBastahrd wrote:
                | Yes, there should be, in addition to financial
                | penalties. Unfortunately, what is considered a "fair"
                | punishment for a corporation rarely ever reaches the
                | level of criminal prosecution and never forces the
                | corporation to change its operating strategy. In an
                | ideal world, corporations would be punished
                | financially and criminally, and be forced to
                | reorganize, whenever they attempt to sweep obvious
                | dangers under the rug for profit's sake.
        
           | counttheforks wrote:
           | > relatively small problem
           | 
           | Setting the houses of their customers on fire is a small
           | problem??
        
             | kube-system wrote:
             | Setting people's houses on fire is in fact a large problem.
             | Potentially setting houses on fire is also a problem, but
             | it is "relatively smaller" than the liability of actually
             | injuring or killing someone, or hundreds of people.
             | 
             | Melting connectors is not all that uncommon of an issue
             | with shoddy electronics. Well designed hardware rarely has
             | this problem, but if you buy enough no-name import junk,
             | you'll probably run into this with other products. (I have)
             | Most of the time it does not cause a fire.
             | 
             | This isn't okay, but it also isn't Surfside condo collapse
             | levels of bad.
        
               | sudosysgen wrote:
                | This is a high-amperage connector. It is
                | significantly more prone to catching on fire than the
                | majority of connectors in no-name import junk if it
                | melts.
        
               | kube-system wrote:
                | Fire is caused by heating something to its ignition
                | temperature, and heat is a function of the conductor
                | the current is flowing through just as much as of the
                | current itself.
               | 
               | The current provided from a base-spec USB charger is
               | enough to start a fire, given the right (er... wrong)
               | conductor. Plenty of us remember lighting Estes rocket
               | motors with flashlight batteries (zinc carbon, no less)
               | as a kid.
        
               | sudosysgen wrote:
               | When a junction melts, it is often the case that the
               | connection itself will become much more resistive, which
               | will make it heat up more and more. For an application
               | where the PSU might be able to supply hundreds of Watts,
               | it's much easier for things to get very hot and ignite.
               | 
               | This is despite the use of a good conductor.
               | 
               | And the issue is that when the power source is that much
               | more powerful you're more likely to make whatever is
               | nearby catch fire.
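The runaway described above is easy to put numbers on: power dissipated in a contact is P = I²R, so resistance rising as a joint degrades multiplies the heat directly. A sketch with illustrative, made-up resistance values (not measurements of any real connector):

```python
# Illustrative sketch only: the contact resistances below are made-up
# round numbers, not measurements of any real connector.
def contact_watts(amps: float, ohms: float) -> float:
    """Power dissipated in a single contact: P = I^2 * R."""
    return amps ** 2 * ohms

amps = 8.0                            # roughly a heavily loaded 12VHPWR pin
healthy = contact_watts(amps, 0.005)  # fresh ~5 mOhm contact -> 0.32 W
degraded = contact_watts(amps, 0.1)   # damaged 100 mOhm contact -> 6.4 W

# A 20x jump in contact resistance is a 20x jump in local heating, and
# several watts concentrated in a few cubic millimetres of nylon housing
# is how a connector melts; the extra heat raises resistance further.
```

At the same current, the damaged contact dissipates twenty times the power of the healthy one, which is why a melting joint tends to get worse rather than stabilize.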
        
           | TheRealPomax wrote:
           | Fair enough, changed it to just federally fined. But maybe
           | going "we know this can cause fires, we're going to sell it
           | anyway" should come with an insane fine. If you're willing to
           | burn down the place for sales, see if you can recover from
           | the same being done to you. And if you can't, maybe that
           | _should_ be the price for prioritising sales over human
           | lives.
        
             | mike_d wrote:
             | It does come with an insane fine. You just need to report
             | it to the Consumer Product Safety Commission.
        
             | [deleted]
        
           | dtjb wrote:
           | >a relatively small problem
           | 
           | This is an overcorrection to OPs comment. If Nvidia knew
           | about the very real risk of fire but continued to push unsafe
           | products with misleading statements, that is a significant
           | problem and hints at a fundamental rot within the company.
        
             | RosanaAnaDana wrote:
             | This is the bigger issue that isn't addressed by what
             | effectively is a minor tax on gross negligence.
        
           | cmsj wrote:
           | a nice juicy fine would certainly make sure they wouldn't
           | knowingly release fire hazard GPUs again.
        
       | zepmck wrote:
        | People pretending to do professional work with low-budget,
        | low-quality workstations. If you cannot afford to operate a
        | 4090 according to spec, you cannot blame NVIDIA if things go
        | wrong.
        
         | Sohcahtoa82 wrote:
         | What spec do you think users are violating?
        
         | mceachen wrote:
         | My understanding is that the failure is the included power
         | adapter.
         | 
          | If I spend $1600 USD on a product, I assume the included
          | adapter will work.
        
         | gambiting wrote:
         | Eh? What are you talking about? The part melting and damaging
         | the card is the adapter that's literally provided by Nvidia.
         | How is this "not according to spec"????
        
       | dvdkon wrote:
       | Here I am, crimping wires for a 3D printer that will draw at most
       | a quarter of the current, and NVIDIA is just giving out
       | connectors with shoddily _soldered_ connections? Shame! It also
       | looks like they think strain relief is optional.
       | 
       | EDIT: This is a crimped wire for ATX power connectors:
       | https://modmymods.com/media/catalog/product/cache/1/image/a8...
       | 
       | Notice that the insulation is also crimped, making sure that the
       | actual electrical connection doesn't flex.
        
         | [deleted]
        
         | NotYourLawyer wrote:
         | Soldering is perfectly fine, if you do it correctly.
        
           | jmole wrote:
           | soldering is a terrible idea for cables that need to flex,
           | which is why you'll typically only find them used in
           | applications with major strain relief, or inside non-user-
           | serviceable enclosures (e.g. inside an ATX PSU)
        
             | snovv_crash wrote:
             | You need strain relief either way, soldering or crimping. A
             | well soldered joint will have lower resistance than a
             | crimped one and be more resistant to vibrations.
        
               | jmole wrote:
               | In most applications crimping offers better vibration
               | resistance and better conductance. Most solder is on the
               | order of 10x as resistive as pure copper, or more.
               | 
               | Welding can be better than solder or crimp, but it's
               | uncommon except for applications like wire-bonding and
               | battery cell termination.
               | 
               | In general, any application where you have a fixed end on
               | one side and a free end on another (wire to board, wire
               | to panel, wire to X), a crimp termination is by far the
               | better choice.
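The "roughly 10x" figure above lines up with textbook resistivities (copper about 1.68e-8 ohm-m, Sn63/Pb37 solder about 1.45e-7 ohm-m); the joint geometry in this sketch is purely illustrative:

```python
# Compare a 1 cm run of 16 AWG copper with the same geometry in solder.
# Resistivities are textbook values; the geometry is illustrative only.
RHO_COPPER = 1.68e-8   # ohm*m
RHO_SOLDER = 1.45e-7   # ohm*m (Sn63/Pb37)
LENGTH = 0.01          # m
AREA = 1.31e-6         # m^2, 16 AWG cross-section

def resistance(rho: float) -> float:
    """Bulk resistance of a uniform conductor: R = rho * L / A."""
    return rho * LENGTH / AREA

ratio = resistance(RHO_SOLDER) / resistance(RHO_COPPER)
# ratio ~ 8.6: putting solder in the current path costs close to an
# order of magnitude in resistivity, as the parent comment says.
```

A well-made crimp is a gas-tight copper-to-copper interface, so the bulk of the current never passes through solder at all.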
        
           | programmer_dude wrote:
           | It's not fine if it's going to melt.
        
             | Nextgrid wrote:
             | It will only melt if it's either undersized or a bad joint
             | to begin with. A properly-sized, well-made solder joint
             | will have no reason to melt unless heated externally by
             | something else failing.
        
           | bayindirh wrote:
           | Soldering is not the correct solution for high power/current
           | applications. Car manufacturers openly instruct "do not
           | solder, use included crimp connectors" for fuel injector
           | wiring repairs for example.
        
             | kube-system wrote:
              | This is because soldered connections are not suitable
              | for applications subject to vibration or repeated
              | motion. They will weaken and break. For stationary
              | applications, soldered connections have superior
              | electrical characteristics.
        
             | jefftk wrote:
             | I'd be wary about extrapolating from cars or other
             | vehicles: they expose their components to far more
             | vibration and shocks than is typical in a house or
             | datacenter.
        
               | bayindirh wrote:
                | As an HPC system administrator who also works on
                | hardware, I'd say that servers' lives are equally
                | hard from a power and temperature perspective. Their
                | lives are not easier just because they are not
                | vibrating.
                | 
                | A server under max load 24/7/365 is really testing
                | its design limits.
        
               | sudosysgen wrote:
               | Sure, but vibration affects solder joints in a way that a
               | server at max load doesn't.
        
               | sangnoir wrote:
               | Counterpoint: cars don't get their cables moved about so
               | often, or bent at sharp angles for aesthetic reasons.
               | Soldering without additional strain relief is a terrible
               | idea - even for low currents.
        
               | LegitShady wrote:
               | car cables move all the time when cars are driving.
        
               | sangnoir wrote:
                | I never said they don't - rereading, I should have
                | used the word "manipulated", because I meant "moved
                | (by a person)": a PC[1] gets opened more frequently
                | than a car hood.
                | 
                | 1. Especially a PC with an RTX 40X0 card - because
                | only enthusiasts are buying them at this point.
        
             | postalrat wrote:
             | Then it might be a bit alarming to you that the connector
             | itself is soldered to the board.
        
             | ars wrote:
             | That's because crimping is a more reliable connection,
             | mechanically. It's less likely to fail.
             | 
             | Solder has better electrical characteristics, but that
             | benefit is outweighed in the field because the connections
             | can break and fail.
        
         | fatneckbeardz wrote:
         | 600 watts is an extraordinary amount of power for these tiny
         | cables/connectors, thats something id think would be something
         | like an Anderson Powerpole , or XT 90, or some kind of high
         | power connector if i was building a little robot or something.
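For scale, 600 W at 12 V is 50 A total. The connector ratings in this sketch are commonly cited manufacturer/spec figures, included only for rough comparison:

```python
# 600 W at 12 V, all in one place. The ratings below are commonly cited
# figures for rough comparison only, not authoritative specs.
WATTS, VOLTS, PINS = 600, 12, 6

amps_total = WATTS / VOLTS        # 50 A
amps_per_pin = amps_total / PINS  # ~8.3 A per 12VHPWR power pin

ratings_amps = {
    "Anderson Powerpole PP45": 45,  # per contact
    "XT90": 90,                     # continuous, hobby/RC use
    "12VHPWR pin": 9.2,             # commonly cited per-pin figure
}
# An XT90 carries the whole 50 A through two large contacts; 12VHPWR
# splits it across six small pins that must share the load evenly.
```

The per-pin figure sits uncomfortably close to its rating, which is why uneven current sharing from a poor joint pushes individual pins past their limits.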
        
           | jetbalsa wrote:
            | I'm really surprised they didn't do something like an
            | XT90 with some signal wires on it. I've seen IEC cables
            | with more girth than these things.
        
           | gw99 wrote:
           | Just a heads up. Powerpoles melt too. Nasty things.
        
       | Mountain_Skies wrote:
       | Coming soon to the EU: graphics cards powered by three USB-C
       | connectors.
        
          | empiricus wrote:
          | Actually, graphics cards will be banned because they
          | consume too much electric power. Just like high-end TVs
          | next year.
        
       | whatever1123 wrote:
        
         | tester756 wrote:
         | You just realized that the world runs on mediocrity?
         | 
         | PS: think of it when visiting doc, mechanic, etc :)
        
           | whatever1123 wrote:
           | I don't know dude. Ironically if you are poor in a place like
           | Boston, you can go to Man's Greatest Hospital.
           | 
           | Programmers were definitely elite at one point. The gulf
           | between them and the medicine kids has grown a lot.
        
             | snovv_crash wrote:
             | I guess you haven't used the medical system recently -
             | despite monumental efforts by doctors, the standard of care
             | is actually horrific when you take the whole system into
             | account. It has turned into a system of keeping patients
             | alive rather than delivering optimal outcomes, and much
             | like the legal system, if you have someone knowledgeable
             | who can put in considerable time on your case your outcomes
             | will be a lot better.
        
               | [deleted]
        
         | LeifCarrotson wrote:
         | If a single PM can screw up this badly, that's a complete
         | failure of the NVIDIA bureaucracy.
         | 
         | The answer to the "5 whys" should never be that one individual
         | made a bad decision, when the bad decision has consequences of
         | this magnitude.
         | 
         | It's a systems issue.
        
           | [deleted]
        
         | [deleted]
        
         | malfist wrote:
         | Nice of you to blame everyone.
         | 
         | What's your proposal to fix it?
        
           | whatever1123 wrote:
           | You know why there isn't trash anywhere? Some people never
           | litter.
           | 
           | Not anti dumping laws. Not a particularly efficient or
           | effective waste management system.
           | 
           | This is the analogy. The plastic in the oceans comes from
           | poor, shitty people in Southeast Asia. They litter! Accept
           | it. It is a cultural problem! They just suck.
           | 
            | If you don't believe that, then you don't believe in the
            | Enlightenment, you don't believe in human progress. It
            | doesn't have to be a law or even some kind of convergent
            | system.
           | 
           | > What's your proposal to fix it?
           | 
           | The simple answer is, next time you login to Slack, just do a
           | better fucking job.
        
             | malfist wrote:
        
             | [deleted]
        
         | nomel wrote:
         | And this is why hardware is hard; it requires good engineering
         | _culture_. Unfortunately, being meticulous is directly opposed
         | to profit.
         | 
          | You have to look at the likes of Apple to find good
          | hardware culture, where "image" has a higher priority,
          | allowing that engineering culture to flex a bit more than
          | usual.
        
           | cogman10 wrote:
           | The thing that sucks is a culture turning bad does not
           | immediately result in loss of sales or revenue. Once you get
           | big enough, your culture can decline for years or even
           | decades.
           | 
            | Yet in the first, second, or even third year of terrible
            | culture changes, a business will likely be patting itself
            | on the back for how fast it got stuff out the door and
            | how much money it saved by reducing head count.
        
         | colechristensen wrote:
          | It's not individual contributors who are responsible for
          | bad quality.
          | 
          | It is corporate, investing, and all-around culture that
          | does not value quality and barely has a sense of it at all.
          | (Steve Jobs was an asshole, but he had a strong sense of
          | quality. It doesn't matter if you agree with his choices -
          | that's taste - but it's hard to argue that the things he
          | drove Apple to produce lacked _quality_.)
          | 
          | People instead value deadlines, ever-advancing
          | middle-manager careers, and tiny differences in margin
          | percentages.
        
           | btown wrote:
           | The conundrum is: how do you build an organization that can
           | simultaneously scale super-linearly and retain a commitment
           | to quality? There are plenty of businesses that pursue stable
           | profitability or linear growth and maintain quality. But the
           | faster an organization grows, the more it becomes metrics-
           | driven, and outside of a regulated industry where quality is
           | a requirement, it's hard to balance the priorities.
           | 
           | I'm reminded of Honeycomb's pointed experiment in setting up
           | a rotating employee seat as a voting member of their board:
           | https://www.protocol.com/workplace/board-of-directors-
           | honeyc... - this type of thing can contribute towards
           | striking the right balance at all levels of an organization.
           | But it's definitely not a solved problem.
        
             | colechristensen wrote:
             | Apple did by having a quality dictator at the top. It would
             | have lasted a lot longer but his unusual disposition led
             | him to accidentally kill himself with a strange diet.
             | 
             | In other words you can't have a company run by the MBA
             | mindset, there must be a superseding set of values beyond
             | business school values.
        
               | [deleted]
        
           | outworlder wrote:
           | > It's not individual contributors responsible for bad
           | quality.
           | 
           | > It is corporate
           | 
           | When dealing with corporations, that's _always_ the case.
           | 
           | Even if it turns out that a single person was able to make a
           | large mistake that went unnoticed, guess what, it's still a
           | corporate failure.
           | 
           | > People instead value deadlines, ever advancing middle
           | manager careers, and tiny differences in margin percentages.
           | 
           | Of these, the managers trying to advance are the most
           | harmful. I've seen situations where actions of managers
           | trying to advance their careers actually caused harm to the
           | company, but they were able to place the blame elsewhere.
           | Some are pretty good at that.
        
           | whatever1123 wrote:
           | > People instead value deadlines
           | 
           | This is definitely true, and is a cultural thing, and
           | deadlines are like, one of the most quintessentially big
           | corpo dysfunctions ever.
           | 
           | > It's not individual contributors
           | 
            | When you have a surgical complication, do you blame the
            | admins and the hospital bureaucracy? No.
            | 
            | When you lose a court case, whose fault is it? Do you
            | blame the clerks? Do you blame the Attorney General? No.
           | 
           | > does not value quality and barely has a sense of it at all
           | 
           | I haven't met an "individual contributor" who cares about
           | that shit in my life. Among the ones who do a lot of work all
           | day, the rare few, the guys with the greasy hair and the
           | track pants who live alone in Sunnyvale for a year, they have
           | consistently been the least interested in a holistic sense of
           | quality.
           | 
           | You will say, oh the managers gave them the goals. Dude the
           | managers do fucking nothing. They say the same shit to
           | everyone, the "IC" has all the agency. They can work, or not
           | work as much as they want.
           | 
           | In fact the hardest working ICs I know are relentlessly
           | optimizing for meeting immediate, dull goals. The ICs are
           | almost always working on consumer products that are
           | tremendously shitty and buggy.
           | 
           | Listen, who the fuck is responsible? Everyone! Why do you
           | give the trackpants guys a pass? They're dicks!
        
         | radicaldreamer wrote:
         | Seems like you have a chip on your shoulder that has nothing to
         | do with the topic at hand.
        
           | wruza wrote:
            | The sentiment is agreeable though. You see smart people
            | everywhere, but then you turn to their products and they
            | have obvious flaws. Not hidden ones, not ones triggered
            | only in particular situations - ones you can't ignore,
            | which greet you every day like new neighbors happy to
            | meet you.
            | 
            | MDN recently screwed up their search input. It stays
            | collapsed until you click on it (some brightest mind
            | decided it'd be so cool?), and there's no I-beam cursor
            | until you start typing. So confusing.
           | 
           | CDNJS recently "updated" their site so you see a search
           | prompt, (maybe) click on it, start typing aaand it loses all
           | letters you've typed before 3-4 seconds passed since the page
           | load. Who allowed to push that into production? That there
           | _is_ some guy responsible for this may drive nuts.
           | 
           | I could go on but am depressed enough atm.
        
         | Ygg2 wrote:
         | > Why are third party amateurs solving this?
         | 
         | They have a vested interest.
         | 
          | The real issue is that Moore's law is on its last legs and you
          | can't squeeze blood from a stone. So manufacturers resorted to
          | increasing TDP, bringing us closer to power levels that are
          | more likely to cause issues.
         | 
          | As for the general shittiness: that's been going on since,
          | well, forever. Not sure why you're complaining. It's both
          | incorrect and off-topic. The race to the bottom is as old as
          | human civilization.
        
       | EugeneOZ wrote:
       | well, mine works fine (with multiple hours of heavy usage), the
       | card itself is quite cold (48-65 C), but the news is worrying,
       | for sure...
       | 
        | Gigabyte promised a 4-year warranty - I hope they'll keep their
        | promise.
        
       | fazfq wrote:
       | Maybe I am running low on coffee because I felt like I was having
       | a stroke while reading this post.
       | 
       | >However, the "safe" is only valid if e.g. the used supply lines
       | from the power supply with "native" 12VHPWR connector have a good
       | quality and 16AWG lines or at least the used 12VHPWR to 4x 6+2
       | pin adapter also offers what it promises.
       | 
       | Like what is this? I know those words, but that sentence makes no
       | sense.
        
         | cmsj wrote:
         | Igor is awesome, but his English has to be read with a generous
         | spirit ;)
        
           | ideamotor wrote:
           | I didn't realize the author was non-native so now I feel bad,
           | thank you.
        
         | [deleted]
        
         | WithinReason wrote:
         | I think all (or at least some) of Igor's articles are machine
         | translated to English
        
         | NotYourLawyer wrote:
         | His English is way more comprehensible than my German.
        
         | m12k wrote:
         | >However, the "safe" is only valid if e.g. the used supply
         | lines from the power supply with "native" 12VHPWR connector
         | have a good quality and 16AWG lines or at least the used
         | 12VHPWR to 4x 6+2 pin adapter also offers what it promises.
         | 
         | However, the "safe" designation for the connector is only valid
         | if e.g. the used cables from the power supply, with a "native"
         | 12VHPWR connector, are of good quality, and the 16AWG cables or
         | at least the used 12VHPWR-to-4x6+2-pin-adapter also live up to
         | what they promise.
        
         | kotlin2 wrote:
         | I don't think it's valid to use "only" with "e.g". "e.g" means
         | it's an example, which implies the existence of other cases
         | that satisfy the criteria. "Only" implies some uniqueness of
         | the subject.
        
           | lelandfe wrote:
           | It does appear the e.g may be dropped here but it's not hard
           | to think of ways to use the two together:
           | 
           | "It's only safe given certain conditions, e.g x"
        
           | klodolph wrote:
           | I would say that it's valid.
           | 
           | "It's legal to drive only if e.g. you have a driver's
           | license."
           | 
           | It's not a great way to say things but it is meaningful. The
           | meaning is "only if [unspecified list of things], and [x] is
           | an example of an item in that list".
           | 
           | > However, the "safe" is only valid if e.g. the used supply
           | lines from the power supply with "native" 12VHPWR connector
           | have a good quality and 16AWG lines or at least the used
           | 12VHPWR to 4x 6+2 pin adapter also offers what it promises.
           | 
           | Here's my interpretation:
           | 
           | > However, the term "safe" is only valid if certain things
           | are true, e.g., the used supply lines from the power supply
           | with "native" 12VHPWR connector are of good quality...
           | 
           | I can't interpret the rest of the sentence.
        
         | treeman79 wrote:
         | That's exactly what it feels like to have a mild stroke. I know
         | these words individually.
        
         | ideamotor wrote:
         | Actually, that is NOT (as such), "how I feel" when I read
         | articles like this - NOT written by professionals (But who is
         | one anyways) - Confucius says, which I tend to agree, which if
         | you, like me, concur.
        
         | dheera wrote:
         | I really don't understand why power connectors aren't just +
         | and - anymore. There's no reason for more than 2 connectors.
         | 
         | What is this 12-pin Molex bullshit? Why the hell would you need
         | 12 pins? Use some Anderson PowerPole connectors, XT30/60/90, or
         | some other connector designed to handle high currents.
        
           | [deleted]
        
           | numpad0 wrote:
           | Totally agreed. I'm guessing they wanted to do it with
           | minimal change in tools and materials.
        
         | yaddaor wrote:
          | The site is hosted on a .de domain, so there is a very good
          | chance that the author is German. It would have been nice if
          | you had considered that before writing such angry and negative
          | words about the author's work.
        
           | fazfq wrote:
           | Where do you see the anger in my message? I thought that the
           | "stroke" part made it clear that my comment was facetious.
        
             | programmer_dude wrote:
             | You should feel bad he is also not a native speaker :)
        
       | epolanski wrote:
       | I love how this issue somehow is bigger than the cards costing
       | 2000 euros and drawing 600 watt if not more.
        
         | [deleted]
        
         | rowanG077 wrote:
          | Because either of those is chump change compared to an entire
          | family dying in a blaze of fire, maybe?
        
         | hypertele-Xii wrote:
         | Fire hazard has the potential to cost lives.
         | 
         | Lives are more expensive than 2000 euros or 600 watts of
         | electricity, even at current prices.
        
       | dang wrote:
       | Recent and related:
       | 
       |  _Users report Nvidia RTX 4090 GPUs with melted 16-pin power
       | connectors_ - https://news.ycombinator.com/item?id=33327515 - Oct
       | 2022 (99 comments)
        
       | bri3d wrote:
       | I'm curious as to why no vendor has yet tried to push a 24V or
       | 48V PSU rail standard to reduce the amperage pushed over these
       | crappy pins. Of course then the board itself needs to be able to
       | handle the higher voltage for down-conversion, but don't these
       | systems usually use switching / Buck regulators as the first
       | stage anyway, which could easily be tuned to handle the higher
       | input voltage with minimal loss added?
        
         | zbrozek wrote:
         | I sort of wish that we used something more like this[0] for
         | add-on module power connectors. And a better mechanical design.
         | The perpendicular card form-factor is silly.
         | 
         | [0] https://www.farnell.com/datasheets/1716278.pdf
        
         | allenrb wrote:
         | Yes, exactly.
         | 
         | Even before this issue, ATX 3.0 felt like a missed opportunity
         | to start phasing in 48v DC power. Very disappointing.
        
         | LarsAlereon wrote:
         | The Power Stamp Alliance was working on 48v modular VRMs for
         | HPC and servers, but I haven't seen any news in the past couple
         | of years.
        
         | paulmd wrote:
         | > I'm curious as to why no vendor has yet tried to push a 24V
         | or 48V PSU rail standard to reduce the amperage pushed over
         | these crappy pins.
         | 
         | people on another HN thread yesterday literally were bitching
         | about the cost (not safety - cost) of a $1 adapter being
         | "forced" on them by NVIDIA and now you want to tell them
         | they're gonna have to buy a whole new PSU? that's gonna be
         | quite the comment thread.
         | 
         | but yeah you're right in general, gotta move to 48V to move
         | forward, pushing infinite amps at low voltages is completely
         | bass-ackwards. The pcie-add-in-card form-factor needs a rethink
         | in general, the GPU is now the largest and most power-hungry
         | part of the system by far, and power isn't the only problem
         | imposed by that ancient AT standard. imo we should move to some
         | standardized "module sizes" for GPUs so that we solve GPU sag
         | and cooling at the same time.
         | 
         | the "pcie card" of the future should be a 12x6x4" prismatic
         | module that slides on rails into a socket that provides a PCIe
          | edge connector as well as an XT90 jack (or similar) for power, at
         | a defined physical location, with 48V. Airflow should happen in
         | a defined direction and manner, whether that's axial flow or
         | blower-style.
         | 
         | also please for the love of god, _hot air rises_ , the AT
         | standard means axial coolers are pushing against convection.
         | But we couldn't even get BTX to take off, which is a trivial
         | modification of ATX to reverse the orientation and fix this
         | problem, the DIY market is deeply deeply rooted in the 1980s
         | and will not abandon ATX without some nudging. Enterprise and
         | OEM are fine, their volumes are big enough to do their own
         | thing, but consumers are spinning their wheels fighting IBM's
         | 1980 mistakes.
         | 
         | It's gonna take government intervention, the EU is gonna have
         | to step in and do it like with USB-C cables. The mess of
         | arbitrary card dimensions slowly growing larger over time is
         | the exact same problem that we had pre-standardization with USB
         | too.
        
           | codeflo wrote:
           | I think at these sizes and weights, the GPU needs to become a
           | board that you screw onto the case, like a second
           | motherboard. Sagging solved. Now the noisy fan, replace it
           | with a preinstalled water cooling block to be integrated with
           | your overall cooling solution, voila: noise and weird airflow
           | solved as well.
        
             | squeaky-clean wrote:
              | All-in-one water loops aren't as great as they're made out
              | to be. You still need fans on a radiator, though they can
              | be quieter because they're often larger. Pump failures are
              | more common than fan failures and harder to repair.
             | Traveling with a water loop is a nightmare unless you drain
             | it first.
        
               | gtirloni wrote:
               | Who's traveling with an ATX tower?
               | 
               | In any case, I've always used the standard water cooling
               | kits and they never failed me (as opposed to constantly
               | breaking fans and huge temperatures). Maybe it's my
               | environment.
        
               | dvngnt_ wrote:
               | people with apartments who move
        
             | paulmd wrote:
             | Exactly, that's what I mean by "on rails". Imagine sliding
             | a server PSU into place and it latches as it seats... do
             | that but with a "GPU module". The case itself needs to be
             | part of the support to fix GPU sag, and that's only
              | possible with dimensional and connector standardization.
             | 
             | Smaller stuff that still reflects the "add-in-card"
             | intention of the original design is fine, you can still
             | have a bunch of PCIe slots for network cards and USB
             | controllers if you want. But the GPU is not really an "add-
             | in-card" anymore and it's problematic in a design sense to
             | keep acting like it is, let alone to have _every single
             | product on the market_ doing something completely different
             | dimensionally.
             | 
             | That's the "pre-USB-micro" era problem that USB faced.
             | 
             | There's no need for water though, that's a whole can of
             | worms.
        
               | flatiron wrote:
               | AGP part deux
        
           | xbar wrote:
           | The inside of consumer-assembled PCs is a terrible place for
           | government intervention.
        
           | Tsiklon wrote:
           | The workstation world sort of broadly standardises card
           | length, allowing for support on 3 sides (PCI Bulkhead, PCI-E
           | Slot, and a card extension with a blade to fit in a rail
           | guide towards the front of the chassis). Even old cheese
            | grater Apple Mac Pros had this setup.
        
           | 323 wrote:
            | At 500W+, maybe the GPU should have its own dedicated power
            | supply with its own power cable. And then the GPU
            | manufacturer could optimize the voltages and amps since they
            | control the whole power chain.
           | 
           | You could have some standardization, a simple power
           | negotiation protocol "I can supply 500W at 24V or 48V" and a
           | standard (big) connector.
           | 
            | A long time ago the display was powered from the main unit,
            | but we moved past that and now displays have their own
            | power cable.
        
             | yurymik wrote:
             | Well, it didn't go that well [last time around](http://www.
             | x86-secret.com/pics/divers/v56k/finalv5-6000.jpg).
        
               | metadat wrote:
               | > Forbidden
               | 
               | > You don't have permission to access this resource.
        
               | wlesieutre wrote:
               | Blocked based on referrer, if you go up to your URL bar
               | and hit enter it should load directly
        
               | metadat wrote:
               | Cool, that fixes it- thanks!
        
             | jandrese wrote:
             | 500W is going to be one hell of a wall wart.
        
               | NavinF wrote:
               | Eh not really, you just need to get two of these and
               | connect the sync pins together:
               | https://hdplex.com/hdplex-fanless-250w-gan-aio-atx-
               | psu.html
               | 
               | Way smaller than SFX PSUs and "gaming" laptop power
               | bricks.
        
             | toast0 wrote:
              | > A long time ago the display was powered from the main
              | unit, but we moved past that and now displays have their
              | own power cable.
             | 
             | That was just switched A/C, so you didn't need to turn off
             | two switches when you were done.
        
         | ryzvonusef wrote:
         | Because we are still in the era of trying to move from 3V/5V to
         | 12V!
         | 
          | Power supply manufacturers and board manufacturers still can't
          | agree on this little step. 24V/48V is a bridge too far!
         | 
         | Same issue has been plaguing the automobile industry, they
         | can't wait to move from 12V to 48V for their low voltage works,
         | but who will bell the cat?
        
           | vhab wrote:
           | Interestingly though, there's a slow but steady push towards
           | 24V on boats. More electronics are available in 24V version
           | or simply support both 12V and 24V. And new boats tend to
           | have a 12V and 24V bank. Electric sailboats usually have a
           | 48V bank as well.
           | 
           | And in an entirely different area, 3D printing completed
           | their transition from 12V to 24V some years ago, and
           | currently there's an active push towards 48V.
           | 
            | I'm personally just looking forward to not spending a
            | fortune on cables due to low voltages. It's frustratingly
            | expensive to carry 12V a decent distance with some higher
            | current draws.
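The "expensive to carry 12 V a decent distance" point is easy to quantify. A rough sketch, where the 240 W load, 5 m run length, and per-meter copper resistance are all illustrative assumptions, not figures from the thread:

```python
# Voltage drop over a two-conductor cable run:
#   V_drop = I * R_per_meter * length * 2  (factor 2 for the out-and-back path)
# All the numbers below are assumed for illustration.

def voltage_drop(power_w: float, voltage_v: float,
                 length_m: float, ohm_per_m: float) -> float:
    """Round-trip voltage drop for delivering power_w at voltage_v."""
    current_a = power_w / voltage_v
    return current_a * ohm_per_m * length_m * 2

for volts in (12, 48):
    drop = voltage_drop(240, volts, 5, 0.005)  # ~14 AWG-ish copper, assumed
    print(f"{volts} V system: {drop:.2f} V drop ({drop / volts:.1%} of the rail)")

# 12 V system: 1.00 V drop (8.3% of the rail)
# 48 V system: 0.25 V drop (0.5% of the rail)
```

The same wire that wastes over 8% of a 12 V rail drops a negligible fraction of a 48 V one, which is why higher bus voltages keep showing up in boats, 3D printers, and data centers.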
        
           | jandrese wrote:
           | > they can't wait to move from 12V to 48V for their low
           | voltage works
           | 
           | That would be a surprising move to be sure. I assume you mean
           | low wattage?
        
             | ryzvonusef wrote:
             | Sorry, Yes.
        
         | amelius wrote:
         | Instead of introducing a new PSU standard, why not change the
         | connector, or add a connector for power.
        
           | masklinn wrote:
           | Because you still have the issue that you need to push insane
           | amperages if you're limited to 12V.
           | 
            | 600W out of 12V requires 50 amps. This means you need more
           | connectors, thicker cables and you have more resistive losses
           | (= heat).
           | 
           | If you can get 48 V rails in, then you get down to a much
           | comfier 12.5A, you can get that through comfortably on a pair
           | of small-section wires.
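The arithmetic in the comment above can be sketched in a few lines of Python. The 600 W figure comes from the thread; the 10 mΩ cable resistance is an illustrative assumption:

```python
# Current and resistive (I^2 * R) loss for delivering the same power
# at 12 V vs 48 V. The cable resistance value is an assumed placeholder.

def cable_current_and_loss(power_w: float, voltage_v: float,
                           cable_resistance_ohm: float) -> tuple[float, float]:
    """Return (current in A, power dissipated in the cable in W)."""
    current_a = power_w / voltage_v
    loss_w = current_a ** 2 * cable_resistance_ohm
    return current_a, loss_w

for volts in (12, 48):
    amps, loss = cable_current_and_loss(600, volts, 0.01)  # 10 mOhm assumed
    print(f"{volts:>2} V rail: {amps:5.1f} A, {loss:5.2f} W lost as heat")

# 12 V rail:  50.0 A, 25.00 W lost as heat
# 48 V rail:  12.5 A,  1.56 W lost as heat
```

Because loss scales with the square of the current, quadrupling the voltage cuts the heat dissipated in the same cable by a factor of sixteen.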
        
         | GekkePrutser wrote:
         | I have a feeling this will make the DC-DC converters more bulky
         | and hot on the card itself.
         | 
         | Not entirely sure though as I've never built those.
         | 
         | However in the longer term it's probably time to start
         | optimizing for power efficiency just like Intel did after the
         | pentium 4. Cards can't keep increasing requirements in this day
          | of energy crisis. And the climate problem ensures it'll be like
         | that for a long time.
        
           | postalrat wrote:
           | Is a 12v to whatever voltages the chips on the card use
           | smaller than a 48v to whatever voltages the chips use?
        
         | scalablenotions wrote:
         | As someone who tinkers with these things a lot, I'm happy for
         | the voltage I'm playing with not to be increased. There's
         | plenty of scope to improve the plug design without approaching
         | anything near the fragility of this nVidia adapter. There's no
         | pressing need to increase voltage.
        
           | mmoskal wrote:
           | I believe 48V would be used since 50V is the maximum voltage
           | considered "safe" for humans. This is also why USB-C goes up
           | to 48V.
        
             | sidewndr46 wrote:
             | It works the other way too. Low voltage is not really
             | "safe", we can just tolerate the risk most of the time.
             | Soak your hands in some saltwater & then grab 12 volt
             | rails, it'll hurt like hell.
        
               | flatiron wrote:
               | "Lick a 9V" is usually how I phrase it. Not the best
               | feeling in the world.
        
           | pathartl wrote:
           | The fragility I think is the main concern here, like you
           | said. I hate when I have three 12v, 8 pin connectors on my
           | GPU. I would be ecstatic between that and a small connector
           | that catches fire if it's not perfectly 100% seated and
           | strain relieved.
        
       | mjsweet wrote:
       | Please shoot me down in flames if needed, but maybe the CPU
       | should be on a riser board and the GPU on the mainboard? Look, it
       | seems to me that Nvidia needs to come up with its own x86
       | reference platform that revolves around supplying enough
       | resources to the GPU rather than relying on adding layer upon
       | layer of bandaid solutions. Listen, I'm not an engineer, and I
       | don't claim to understand voltage, amps and watts, but it seems
        | plainly obvious that if Nvidia wants to keep pushing discrete
        | components that need a lot of juice (as opposed to Apple's SoC
       | strategy), then they need to come up with their own robust
       | reference platform.
       | 
       | On another note, did EVGA see all this coming?
        
         | some-guy wrote:
         | I was looking at an old video card I have, the GeForce 4600ti,
          | and remember thinking it was _huge_ at the time compared to
         | my Voodoo 3 3000 before it. Then the 8800gt seemed huge. Then
         | the RX480. Then my current 3070.
         | 
         | At a certain point it's just stupid. My work laptop (Macbook
         | Pro M1 Max) can do some very impressive stuff in such a small
         | quiet package, and it just makes sense to go with a single
         | large APU in the future.
        
         | [deleted]
        
       | rob-olmos wrote:
       | Was there a good reason why they switched from 3x8pin to 1x12pin
       | for so much wattage?
        
         | Latty wrote:
         | I mean, firstly just that consumers hate having to plug in a
         | ton of cables: it's bad for aesthetics, it's annoying to manage
         | the cables, etc...
         | 
         | The bigger reason is probably to save board space as nvidia
         | have been shrinking the form factor of the actual boards so
         | they can dedicate more room to flow-through cooling. Connectors
         | take up a lot of board edge.
         | 
         | The new spec does also have data pins so the devices can
         | negotiate for power which could in theory reduce issues and
         | inform the consumer better about limits with future PSUs and
         | devices.
         | 
         | So there are advantages to the new spec, both specifically with
         | the form factor and more generally.
        
         | cmsj wrote:
         | They started this in the last generation where the founders
         | cards had a different-but-similar connector.
         | 
         | In this generation, they would actually need 4x8pin to reach
         | the 600W maximum these cards are supposed to be able to draw in
         | a fully overclocked state, so really this just comes down to
         | PCB space, but with the draw getting this high, the cable also
         | has some signalling wires so the card knows how many Watts it
         | should limit itself to.
        
           | rob-olmos wrote:
           | Ah interesting, thanks. Looking more into it, 4x8pin would be
           | 12x12v wires/pins, and the 12VHPWR only has 6x12v for the
           | same max wattage.
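The per-pin numbers implied by that comparison can be checked quickly. A sketch assuming 600 W at 12 V split evenly across the 12 V conductors (real connectors never share current perfectly evenly, which is part of the problem):

```python
# Per-pin current for 600 W at 12 V under an (idealized) even split
# across the 12 V conductors. Pin counts are from the comment above.

POWER_W = 600
VOLTS = 12
TOTAL_AMPS = POWER_W / VOLTS  # 50 A in total

connectors = {
    "4x 8-pin (12x 12V pins)": 12,
    "12VHPWR (6x 12V pins)": 6,
}

for name, pin_count in connectors.items():
    print(f"{name}: {TOTAL_AMPS / pin_count:.2f} A per pin")

# 4x 8-pin (12x 12V pins): 4.17 A per pin
# 12VHPWR (6x 12V pins): 8.33 A per pin
```

So the new connector carries twice the current per pin even in the best case, which leaves far less margin for a poorly seated or deformed contact.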
        
           | bcrosby95 wrote:
           | As someone ignorant to this space, considering the PCB is
           | already shorter than the heatsink, I assume the problem is
           | more of re-arranging other components on the PCB than the
           | actual physical space the final product takes up.
        
       | beebeepka wrote:
       | This whole thing is cracking me up. I guess this is why the 600w
        | card didn't show up. Maybe there was something to the pre-launch
        | reports about cards melting themselves.
       | 
       | You know what would teach Nvidia? Buying more 4090 cards at crazy
       | prices. Vote with your wallets. Wait on actual lines to give them
       | more money for stupid products. Moar powah, more upscaling crap,
       | more latency. Moar
        
       | cmsj wrote:
       | Buildzoid on YouTube has a very good response to this - he does
       | not agree that the fault is the (still terrible) wires at the
       | back of the connector.
       | 
       | His reasoning is that the melting is happening down at the pin
       | end of the connector, not at the back, and the pins that are
       | melting the most are the edge ones, not the middle ones.
       | 
       | There's about 10 minutes of critique of the various ways other
       | people have been evaluating it, which is interesting for sure,
       | but the meat starts here: https://youtu.be/yvSetyi9vj8?t=744
        
         | impalallama wrote:
          | Really seems like splitting hairs when you're specifying which
          | specific part of the cheap faulty component is "responsible".
        
         | deng wrote:
          | I agree, that does not make sense. There must be a problem with
          | the connection to the pins, and I'd really like to see a close-
          | up of the female connector on the problematic card, because if
          | anything there gets bent out of shape even slightly, you can
          | quickly get high resistance.
         | 
         | EDIT: Watched the video further, and actually the cable has the
         | female side and they're using 2-split connectors. Yeah, that's
         | asking for trouble when you push 50 amps through those...
          | Sheesh, such shoddy stuff for a $1600 card, just incredible.
        
         | buildbot wrote:
          | I think we would see more problems, then, with this connector -
         | while it is new to the consumer space, all of Nvidia's SXM
         | boards use molex micro-fit connectors for power. For example my
         | Dell C4140 has four 10-pin and four 8-pin micro-fit connections
         | to the SXM2 board, and they are bent at 90 degrees in the 1u
         | chassis.
        
         | Latty wrote:
         | Buildzoid is great, but worth noting he has a very specific
         | perspective which is that of a hardcore enthusiast (hence the
         | name of the channel), and he often points that out, but here I
         | think he ends up kinda just saying "why not just stick with the
         | old ones", which is fine in comparison to something that sucks,
         | but it doesn't hold up more generally for most people.
         | 
         | I get he may not care, but plugging four individual power
         | cables into a GPU is a pain for consumers, and having a single
         | connector for everything is great from the normal consumer
         | perspective.
        
         | washadjeffmad wrote:
         | The failure doesn't necessarily have to happen at the junctions
         | for the junctions to be a cause. It also makes sense that the
         | edge pins would heat up more because they're both surrounded by
         | less mass and a likelier conduit to distribute current and heat
         | based on the resistivity of the shown wiring.
         | 
         | Reminds me of the molded MOLEX to SATA adapters for HDDs that
         | suffer from similarly shoddy wiring design and are prone to
         | melting/fires. https://www.youtube.com/watch?v=fAyy_WOSdVc
        
           | deng wrote:
           | > The failure doesn't necessarily have to happen at the
           | junctions for the junctions to be a cause.
           | 
           | Indeed, it can also happen that because some connection is
           | failing, too much current flows through the remaining ones,
           | but this is clearly not what is happening here (because it's
           | not the inner connectors that are melting, and also because
           | Igor tried and couldn't replicate this). This really looks
           | like heat from high resistance, and the outer connectors are
           | simply those experiencing the most mechanical stress, and
            | it's really easy for these 2-split connectors to get bent out
           | of shape.
        
           | Latty wrote:
           | Ah yes, the classic "MOLEX to SATA? Lose your data!".
        
       | tpmx wrote:
       | My guess: The Taiwanese companies that build most of the GPU
       | boards will supply a fix sooner than Nvidia.
       | 
       | This is not exactly a hard thing to build in Taiwan.
        
       | mensetmanusman wrote:
       | Training cat detection melts cables.
        
       | Varloom wrote:
        | Why has the card NOT been recalled already?
        | 
        | Are they waiting for some major house-fire news to break out
        | before doing something?
        
         | dylan604 wrote:
         | The formula A x B x C = X, where A is the number of vehicles in
         | the field, B is the probable rate of failure, and C is the cost
         | of out-of-court settlement for that failure
         | 
         | s/vehicles/GPUs/
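The formula in that comment is just an expected-cost comparison. A sketch with purely made-up numbers (none of these figures relate to any real product):

```python
# Fight Club recall math: settle rather than recall when A * B * C < recall cost.
#   A = units in the field, B = probable failure rate, C = cost per settlement.
# Every number used below is hypothetical.

def cheaper_to_settle(units: int, failure_rate: float,
                      settlement_cost: float, recall_cost: float) -> bool:
    """True if expected settlement payouts undercut the cost of a recall."""
    expected_payouts = units * failure_rate * settlement_cost
    return expected_payouts < recall_cost

# 100k units, 0.05% failure rate, $200k per settlement vs. a $50M recall:
print(cheaper_to_settle(100_000, 0.0005, 200_000, 50_000_000))  # True
```

As the reply below notes, this calculation only applies where regulators let it; a forced recall overrides the arithmetic entirely.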
        
           | iwillbenice wrote:
            | That line from Fight Club really needs a whole lotta
            | asterisks and qualifications. There are plenty of forced-
            | recall scenarios where the manufacturer would prefer to use
            | the A*B*C <= X approach but is not allowed, or is simply
            | overruled by the government. Safety of life being the
            | primary overriding concern, for good reason.
           | 
           | I'm pretty surprised this issue has gotten as long in the
           | tooth time wise as it has. I remember seeing a story about
           | this ticking-timebomb power connector like 1.5 months ago and
           | then heard nothing. I'm guessing there is scurrying behind
           | the scenes between then and now, but you never know.
        
             | lifehasavalue wrote:
             | Safety of life is not a boundless good. All of the good
             | things around us come at some risk to life. Of course, the
             | government's liability to overvalue safety (or to be more
             | neutral, to _inconsistently_ determine the value of safety)
             | in a particular proceeding (be it before a court in a
             | lawsuit or before a regulatory agency that might order a
             | recall), is just another part of the calculation.
        
             | dylan604 wrote:
             | >I'm pretty surprised this issue has gotten as long in the
             | tooth time wise as it has.
             | 
             | Just to remove ambiguity, are you saying that the ABC<=X is
             | long in the tooth or that the Nvidia connector is?
             | 
             | "Good" stories like ABC<=X, McDonald's coffee too hot, etc
             | are infamous for a reason lest someone forget and repeat.
        
         | smoldesu wrote:
         | Nvidia would be wise to push a patch limiting the power
         | consumption to safe levels, and offer a "hot-rod mode" as a
         | kernel parameter or something.
        
           | capableweb wrote:
           | Benchmarks (and therefore profits) would probably take a hit
           | if so, hence they are not doing it.
        
             | smoldesu wrote:
             | Which cards are even competitive with the 3090, much less
             | the 4080? It's perfectly acceptable for Nvidia to admit
             | they were wrong about power scaling _if_ they offer an
             | unlocked mode for idiots in fireproof houses.
        
               | squeaky-clean wrote:
               | I got a nice settlement check from AMD who had to admit
               | that their 8 core CPUs were actually 4 core CPUs with
               | dual parallel math units. I'm pretty sure a post-launch
               | reduction in power and performance of these cards would
               | qualify any current owners for a refund.
        
               | Y_Y wrote:
               | Nvidia are doing something like this now, advertising
               | their CUDA TOPS as double their actual value, but with a
               | footnote telling you this is for "sparse"
               | multiplications. Here "sparse" means each dot-product can
               | assume half of each input vector is zero.
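A minimal sketch of the arithmetic behind that footnote. The 2:4 sparsity pattern and the operation counting below are illustrative, not Nvidia's published methodology:

```python
# With 2:4 structured sparsity, at most 2 of every 4 weights are
# nonzero, so a dot product needs only half the multiply-accumulates.
# Hardware that skips the zeros can therefore quote double its dense
# throughput -- the factor behind the doubled "sparse TOPS" figure.

def dense_macs(n):
    """Multiply-accumulates for a dense length-n dot product."""
    return n

def sparse_2_4_macs(n):
    """MACs when 2 of every 4 weights are zero and hardware skips them."""
    assert n % 4 == 0, "2:4 sparsity is defined on groups of 4 elements"
    return n // 2

n = 1024
print(dense_macs(n) / sparse_2_4_macs(n))  # prints 2.0
```

The throughput number is only real for workloads whose weights actually fit the 2:4 pattern; dense workloads see the undoubled figure.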
        
               | m-p-3 wrote:
               | The RX 7900 XT should be fairly close and at a lower
               | wattage.
        
           | ploxiln wrote:
           | The thing is, the 4090 is the pushed-past-the-limit supercar
           | of the generation, these things always have issues. It's just
           | due to market conditions that they didn't release all the
           | somewhat reasonable cards (4070, 4080) yet. They don't want
           | to cannibalize 30xx series sales, they don't want to delay
           | 40xx series, they want the top of the charts. _shrug_
           | 
           | Just don't buy a 4090. The technology is incredible, the
           | model is stupid. Buy a 3080 Ti, or wait for a 4080. Like,
           | don't buy a $1 million Bugatti, you could get a $250k
           | supercar and it would probably be better for any real-world
           | use. (But do people buying $50k BMWs complain about these
           | things?)
        
       | maxwell86 wrote:
       | > exactly for how they implemented and supply a 12V solution
       | (which includes both the physical products they supply, and the
       | messaging they've put out around it) which this article yet again
       | underlines as being both real, and even worse than initially
       | thought.
       | 
       | I'd recommend people who are gloating to read the article. EDIT:
       | see my edit below.
       | 
       | The conclusions are clear:
       | 
       | - The problem is NOT the new connection; that's fine. New PSUs
       | come with a connection that does not need any adaptor and those
       | are safe and work fine.
       | 
       | - The problem is a poor-quality adaptor shipped with 4090s for
       | people that buy a $1600 GPU but then skimp on a new PSU and want
       | to pair it with an old one (EDIT: "skimp" is out of place and
       | victim blaming; it would be more appropriate to say here that
       | NVIDIA and partners decided to add an adapter to avoid
       | suggesting that users need a new PSU).
       | 
       | These adaptors are distributed by NVIDIA but built by a supplier.
       | Igor's recommendation is, I quote: "NVIDIA has to take its own
       | supplier to task here, and replacing the adapters in circulation
       | would actually be the least they could do.".
       | 
       | EDIT: This comment can be misunderstood as me speculating whether
       | the OP read the article or not. I am not speculating: the OP did
       | not read the article, which claims the opposite of what the OP
       | claims. The OP claims that 12V solutions are the issue, while the
       | article states that they are fine, and as proof shows that new
       | PSUs implement them correctly. In fact the _goal_ of the article
       | is to set the record straight about this, by clarifying that the
       | only problem is the quality of the adapter, not 12 V per se. So
       | this comment is not speculation about whether the OP read the
       | article or not, but a response to set the record straight for
       | those who might read the OP's comment only, but not the article (I
       | often come to HN for the comments more than the articles, so I'd
       | find such a comment helpful myself).
        
         | dylan604 wrote:
         | But does the inclusion of a shoddy adapter not just encourage
         | the usage of the older PSU?
        
           | cptskippy wrote:
           | The problem isn't with older PSUs, they can work fine with a
           | good adapter.
           | 
           | The problem is with the adapter design, it is not just one
           | bad choice but multiple layers of negligence that compound
           | the issue.
           | 
           | * The adapter has 6 pins all bridged together by thin
           | connections that can break.
           | 
           | * 4 heavy gauge wires are attached to those pins with a
           | surface mount solder joint. They're not through hole soldered
           | which would provide more contact AND far greater strength.
           | They're not crimped which would provide the best contact and
           | strength.
           | 
           | * There's no strain relief. So if you hold the cables close
           | to where they're soldered to the connector and flex it you
           | can easily break those surface mount solder joints.
           | 
           | * Because the 6 pins / 4 wires are all bridged
           | asymmetrically, some bridges have more current passing
           | through them and if they're fatigued or damaged they'll have
           | higher resistance. Higher resistance means more heat.
           | 
           | Overall it's just really poorly engineered on multiple
           | levels. It's diarrhea poured over-top an open face shit
           | sandwich.
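A back-of-envelope sketch of why the asymmetric bridging described above matters. Power dissipated in a contact is P = I²R, so a modest rise in resistance at one fatigued joint multiplies its heat. The resistances and current split below are illustrative guesses, not measurements from the article:

```python
# Rough numbers: a 450 W card drawing from a 12 V rail.
TOTAL_POWER_W = 450.0
RAIL_VOLTAGE_V = 12.0

total_current_a = TOTAL_POWER_W / RAIL_VOLTAGE_V  # 37.5 A total

# Healthy case: current shared evenly across 6 pins, ~5 mOhm per joint.
per_pin_a = total_current_a / 6
heat_per_pin_w = per_pin_a ** 2 * 0.005

# Degraded case: a cracked bridge forces one pin to carry a third of the
# current through a joint whose resistance has risen to 50 mOhm.
bad_pin_a = total_current_a / 3
bad_pin_heat_w = bad_pin_a ** 2 * 0.050

print(heat_per_pin_w)   # roughly 0.2 W per joint when healthy
print(bad_pin_heat_w)   # roughly 7.8 W concentrated in one tiny joint
```

The point is the scaling, not the exact figures: heat grows with the square of the current and linearly with resistance, so uneven sharing plus a damaged joint compounds quickly.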
        
             | KIFulgore wrote:
             | Same conclusion here. Pushing 500+ watts through surface-
             | mount solder joints is pure negligence.
        
           | maxwell86 wrote:
           | Yes. They should not have included any adapter at all.
           | 
           | Those with older PSUs should have had to make a decision
           | about whether they want an adapter or not, and then pick an
           | adapter of the appropriate quality.
        
         | [deleted]
        
         | sudosysgen wrote:
         | Why would I waste a perfectly good PSU just because Nvidia
         | can't make adapters that don't melt?
        
           | maxwell86 wrote:
           | TBH NVIDIA should just have not included any adapter at all.
        
         | GekkePrutser wrote:
         | "Skim on a new PSU" sounds like people are cheaping out or
         | something. Many people already have a more than sufficient PSU
         | and replacing it just for another plug is a waste of natural
         | resources.
         | 
         | NVidia should just include an adaptor that's not a fire hazard.
         | The consumers are not to blame here.
         | 
         | Ps I think you mean "skimp"
        
           | maxwell86 wrote:
           | Agreed, not sure what other words to use instead.
           | 
           | I think it would have been better for these cards to not have
           | an adapter at all. I've added an EDIT to try to word this
           | differently.
        
         | Sohcahtoa82 wrote:
         | Other commenters are claiming that NVIDIA _KNEW_ the adapter
         | had an issue with melting and/or catching fire. If that's
         | true, I think NVIDIA still has 100% liability.
         | 
         | If it was late in the development cycle that this was
         | discovered, then the proper thing to do would have been to
         | delay the release, or just not include adapters and offer them
         | later. It would have been a minor PR hit, but not nearly as bad
         | as shipping adapters known to be faulty.
        
           | maxwell86 wrote:
           | > Other commenters are claiming that NVIDIA KNEW the adapter
           | > had an issue
           | 
           | Those refer to a PCI Express Forum issue that was opened
           | about the connector drawing too much power, not the adapter.
        
         | pvg wrote:
         | _I 'd recommend people gloating to read the article._
         | 
         | This is a neat way to parallelize and scale up this swipe but
         | it's still the same swipe:
         | 
         |  _Please don 't comment on whether someone read an article.
         | "Did you even read the article? It mentions that" can be
         | shortened to "The article mentions that."_
         | 
         | https://news.ycombinator.com/newsguidelines.html
        
           | maxwell86 wrote:
           | Thanks. I've added an EDIT to clarify that I am not
           | speculating about whether the OP read the article, and that
           | the only point of my comment is to set the record straight
           | for those who often just come to HN for the comments (like
           | myself).
        
             | pvg wrote:
             | Just take it out. You don't know who has and hasn't read
             | the article, how much they are 'gloating' and it's one of
             | the oldest known bad tropes of internet forums which is why
             | it's in the guidelines. Your comment only gets better
             | without it.
        
       | RedShift1 wrote:
       | These RTX4090 cards are absolutely gigantic, why would an
       | engineer choose such a small power connector? There's plenty of
       | room for larger connectors, or multiple connectors.
        
         | fooey wrote:
         | the actual PCB is not overly big, it's the cooler that's
         | massive
        
         | Saris wrote:
         | Or at least have current sensing and thermal sensing inside the
         | card on each supply rail at the connector header, so it can
         | fault if something goes wrong.
        
       | dheera wrote:
       | Why don't they put thermistors next to each connector and detect
       | conditions like this early? $1 addition to a $10000 device.
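The per-connector monitoring idea can be sketched in firmware-style pseudologic. The thresholds and sensor layout here are hypothetical, chosen only to illustrate the fault/warn/ok split; a real card would read thermistors through an ADC or I2C sensor:

```python
# Hypothetical per-connector thermistor check. Thresholds are made up:
# connector housings are typically plastic that deforms well above
# normal operating temperature, so we fault before reaching that zone.
FAULT_TEMP_C = 105.0
WARN_TEMP_C = 90.0

def check_connector(temps_c):
    """Classify a list of per-pin temperature readings (degrees C)."""
    hottest = max(temps_c)
    if hottest >= FAULT_TEMP_C:
        return "fault"  # cut board power before anything melts
    if hottest >= WARN_TEMP_C:
        return "warn"   # throttle the card and log the event
    return "ok"

print(check_connector([41.0, 43.5, 40.2]))   # prints ok
print(check_connector([44.0, 96.3, 45.1]))   # prints warn
print(check_connector([50.0, 112.7, 48.9]))  # prints fault
```

Per-pin sensing matters because, as discussed elsewhere in the thread, a single degraded joint can run far hotter than the connector's average.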
        
         | kube-system wrote:
         | If you correctly predicted that your connector would overheat,
         | the obvious solution is not to detect the issue but to fix the
         | design.
        
         | hypertele-Xii wrote:
         | Greed.
        
       | dsign wrote:
       | I'm going to be the dissenting voice here.
       | 
       | The connector is shoddy, and it's good entertainment value for me
       | personally, since I can't afford the card.
       | 
       | But that beast of a card, despite all the amps and watts it
       | swallows, is a very affordable tool for computational tinkerers--
       | think people working in their garage in the next molecular
       | simulation software or AI application--not to mention graphical
       | artists and architects. True, you can get more computational
       | power by paying for cloud services, but a cloud workstation with
       | 32 gigabytes of RAM _without_ any GPU will cost you 1700 USD/mo
       | in AWS, and you will have to connect through an RDP interface that
       | just hurts.
       | 
       | So, back to the connector, I hope Nvidia and third party
       | manufacturers solve it. I will hold off on burning bridges until
       | the day they decide they won't sell the cards anymore and
       | instead rent them at the same price that Amazon does. We don't
       | need another Adobe.
        
         | Retric wrote:
         | A fire hazard is a major concern no matter what the card can or
         | can't do.
         | 
         | Anyway, for home tinkering the 4090 is generally a minor
         | improvement over a 4080 or even vastly cheaper cards. More is
         | better, but not always by very much.
         | 
         | That said, the real value of cloud services is you can
         | sometimes get more done with 2TB of RAM for 4 hours than you
         | can with 32GB of RAM for 4 months. All without the need to have
         | that much computing power anywhere near you.
        
           | dsign wrote:
           | > That said, the real value of cloud services is you can
           | sometimes get more done with 2TB of RAM for 4 hours than you
           | can with 32GB of RAM for 4 months. All without the need to
           | have that much computing power anywhere near you.
           | 
           | I know that. Alas, I can't get more programming done with 2TB
           | of RAM in 4 hours than I can get with 32GB in three years.
           | Human limitations.
        
       | bromuk wrote:
       | EVGA breaking away from working with NVIDIA was a warning sign of
       | things to come.
        
       | MBCook wrote:
       | Perhaps this wouldn't be a problem if the cards didn't need 1700W
       | (estimated).
       | 
       | Maybe this is not just a bad design, but physics trying to warn
       | us this path isn't smart.
        
         | TwoNineA wrote:
         | > Perhaps this wouldn't be a problem if the cards didn't need
         | 1700W (estimated).
         | 
         | [citation needed]
        
           | MBCook wrote:
           | That was supposed to be tongue-in-cheek, thus the
           | "(estimated)" part.
           | 
           | Hooray for the clarity of sarcasm in text. Sigh.
           | 
           | I know it's not that bad. A standard wall circuit can't
           | supply that much power. But I find the power levels graphics
           | cards have reached crazy, and I think that's what we should be
           | solving, not how to supply them more power without things
           | melting.
        
         | bcrosby95 wrote:
         | It's ok, I'm sure your comment won't be sarcasm in a generation
         | or two.
         | 
         | I'm looking forward to the day my PC uses more electricity than
         | my air conditioning. With the power of GeForce I might see that
         | day.
        
         | cmsj wrote:
         | That's an interesting estimate ;)
         | 
         | The 12VHPWR connector is actually rated for up to 600W.
        
         | rjh29 wrote:
         | 4090's TDP is 450W, and it demonstrably runs on 850W PSUs.
         | 
         | If the cables are rated high enough and the PSU is large
         | enough, there is no wattage that is "less valid" than any
         | other. The 3090 Ti from the previous generation runs at 450W
         | too; it just uses 3x 6+2 cables rather than Nvidia's fancy
         | adapter.
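The comparison above reduces to simple arithmetic. The connector ratings below are the commonly cited spec figures (150 W per 8-pin PCIe connector, 600 W for 12VHPWR), used here only to show both schemes cover the card's TDP on paper:

```python
# Both power-delivery schemes cover a 450 W TDP; what differs is the
# headroom and how many physical connectors carry the load.
TDP_W = 450

PCIE_8PIN_RATING_W = 150                    # classic 6+2 PCIe connector
legacy_capacity_w = 3 * PCIE_8PIN_RATING_W  # 3090 Ti style: 3 connectors

HPWR_RATING_W = 600                          # single 12VHPWR connector

print(legacy_capacity_w >= TDP_W)  # prints True
print(HPWR_RATING_W >= TDP_W)      # prints True
print(HPWR_RATING_W - TDP_W)       # prints 150 -- 12VHPWR's headroom in W
```

Note this ignores the ~75 W the PCIe slot itself can supply, which adds margin to both setups; the melting reported in the article is about adapter build quality, not a shortfall in rated capacity.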
        
       | riquito wrote:
       | EVGA decision [1] to stop selling NVIDIA cards looks every day
       | smarter
       | 
       | [1] https://news.ycombinator.com/item?id=32870677
        
         | abracadaniel wrote:
         | They reportedly made some prototype 4000 series cards, so I
         | wonder if they saw a problem in the spec and that was the final
         | straw.
        
           | smoldesu wrote:
           | That's what I'm thinking, too. EVGA seems like a pretty
           | reliable manufacturer in my experience, their bus-powered
           | 1050 Ti lasted me 7 years and still runs in my brother's PC,
           | to my knowledge. Wouldn't surprise me if Nvidia rejected some
           | board revision EVGA suggested, which led to the ensuing
           | fallout.
        
             | aidenn0 wrote:
             | I'm still looking for a good replacement to my EVGA bus-
             | powered 1050Ti. I just want a bus-powered GPU that can run
             | desktop applications. I have a separate one with a power
             | connector for ML stuff.
        
               | neurostimulant wrote:
                | The GTX 1650 has a 75-watt TDP, and there's a low-profile
                | variant that can fit a small form factor case.
        
               | smoldesu wrote:
               | Are the new Intel GPUs bus powered? If not, they could
               | make a killing off of frugal Linux users by offering a
               | chip designed around the ~70w TDP of a PCI bus.
        
               | aidenn0 wrote:
               | I think the A380 is, but the only one I can find is
               | ASRock and is too tall to fit in my case.
        
               | Sohcahtoa82 wrote:
               | The Intel A380 is an absolute utter joke of a GPU and I
               | question why it even exists.
               | 
               | It's _barely_ an upgrade over your 1050 Ti.
               | 
                | I urge you to check out the Gamers Nexus benchmarking of
                | it before you consider buying it:
                | https://youtu.be/La-dcK4h4ZU
        
               | aidenn0 wrote:
               | I could potentially use it with Wayland, which I can't
               | reasonably do with the 1050 Ti.
        
               | smoldesu wrote:
               | If it uses the Intel GPU drivers, then it could bench
               | worse and still be an upgrade over the Nvidia card (at
               | least for Linux users).
        
               | Queue29 wrote:
               | I use a rx6400 to drive 2 4k displays, and it works
               | great. Out of the box Linux support, too.
        
               | aidenn0 wrote:
               | That was on my short-list to look at, good to have
                | another data-point. How is the Linux support? My most
                | recent video card experience with AMD is an HD 6350, and
                | that has very mediocre Linux support (as became obvious
                | when KDE and Firefox both started using GPU-assisted
                | rendering by default).
        
               | Queue29 wrote:
                | AMD open-sourced their drivers and has maintained them
                | directly in the kernel since 2017. No issues at all for me
                | using Ubuntu 20.04 or 22.04, which uses Wayland and GPU
                | rendering by default.
        
         | xeonmc wrote:
         | On many levels, NVIDIA is the Riot Games of GPUs.
        
           | bolt7469 wrote:
           | I've played League of Legends for 10 years and have no idea
           | why Riot Games would be a good analogue for NVIDIA. Can you
           | elaborate?
        
             | postalrat wrote:
              | Maybe because so many people say they hate Riot or Nvidia
              | but can't really tell you why, and continue to use their
              | products?
        
       ___________________________________________________________________
       (page generated 2022-10-28 23:01 UTC)